path | concatenated_notebook |
---|---|
docs/tutorials/bigquery.ipynb | ###Markdown
Copyright 2019 The TensorFlow IO Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
End-to-end example for the BigQuery TensorFlow reader View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Overview This tutorial shows how to use the [BigQuery TensorFlow reader](https://github.com/tensorflow/io/tree/master/tensorflow_io/bigquery) to train a neural network using the Keras sequential API. Dataset This tutorial uses the [United States Census Income Dataset](https://archive.ics.uci.edu/ml/datasets/census+income) provided by the [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php). This dataset contains information about people from a 1994 Census database, including age, education, marital status, occupation, and whether they make more than $50,000 a year. Setup Set up your GCP project **The following steps are required, regardless of your notebook environment.** 1. [Select or create a GCP project.](https://console.cloud.google.com/cloud-resource-manager) 2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project) 3. [Enable the BigQuery Storage API](https://cloud.google.com/bigquery/docs/reference/storage/enabling_the_api) 4. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Install the required packages, and restart the runtime
###Code
try:
# Use the Colab's preinstalled TensorFlow 2.x
%tensorflow_version 2.x
except:
pass
!pip install fastavro
!pip install tensorflow-io==0.9.0
!pip install google-cloud-bigquery-storage
###Output
_____no_output_____
###Markdown
Authenticate
###Code
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
###Output
_____no_output_____
###Markdown
Set your PROJECT ID
###Code
PROJECT_ID = "<YOUR PROJECT>" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
%env GCLOUD_PROJECT=$PROJECT_ID
###Output
_____no_output_____
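###Markdown
Optionally, the BigQuery Storage API can also be enabled from the command line. This is a sketch, not part of the original tutorial: it assumes the Cloud SDK is available in the runtime and that `bigquerystorage.googleapis.com` is the current service name; the console link above remains the authoritative path.
###Code
!gcloud services enable bigquerystorage.googleapis.com
###Output
_____no_output_____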
###Markdown
Import Python libraries, define constants
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
from six.moves import urllib
import tempfile
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from google.api_core.exceptions import GoogleAPIError
LOCATION = 'us'
# Storage directory
DATA_DIR = os.path.join(tempfile.gettempdir(), 'census_data')
# Download options.
DATA_URL = 'https://storage.googleapis.com/cloud-samples-data/ml-engine/census/data'
TRAINING_FILE = 'adult.data.csv'
EVAL_FILE = 'adult.test.csv'
TRAINING_URL = '%s/%s' % (DATA_URL, TRAINING_FILE)
EVAL_URL = '%s/%s' % (DATA_URL, EVAL_FILE)
DATASET_ID = 'census_dataset'
TRAINING_TABLE_ID = 'census_training_table'
EVAL_TABLE_ID = 'census_eval_table'
CSV_SCHEMA = [
bigquery.SchemaField("age", "FLOAT64"),
bigquery.SchemaField("workclass", "STRING"),
bigquery.SchemaField("fnlwgt", "FLOAT64"),
bigquery.SchemaField("education", "STRING"),
bigquery.SchemaField("education_num", "FLOAT64"),
bigquery.SchemaField("marital_status", "STRING"),
bigquery.SchemaField("occupation", "STRING"),
bigquery.SchemaField("relationship", "STRING"),
bigquery.SchemaField("race", "STRING"),
bigquery.SchemaField("gender", "STRING"),
bigquery.SchemaField("capital_gain", "FLOAT64"),
bigquery.SchemaField("capital_loss", "FLOAT64"),
bigquery.SchemaField("hours_per_week", "FLOAT64"),
bigquery.SchemaField("native_country", "STRING"),
bigquery.SchemaField("income_bracket", "STRING"),
]
UNUSED_COLUMNS = ["fnlwgt", "education_num"]
###Output
_____no_output_____
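###Markdown
As an optional check (not part of the original tutorial), you can preview which columns and BigQuery types remain after dropping `UNUSED_COLUMNS`; these are the same name and dtype lists handed to the BigQuery reader later on.
###Code
# Columns the reader will actually request, with their BigQuery types.
selected_fields = [field for field in CSV_SCHEMA if field.name not in UNUSED_COLUMNS]
print([field.name for field in selected_fields])
print([field.field_type for field in selected_fields])
###Output
_____no_output_____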
###Markdown
Import census data into BigQuery Define helper methods to load data into BigQuery
###Code
def create_bigquery_dataset_if_necessary(dataset_id):
# Construct a full Dataset object to send to the API.
client = bigquery.Client(project=PROJECT_ID)
dataset = bigquery.Dataset(bigquery.dataset.DatasetReference(PROJECT_ID, dataset_id))
dataset.location = LOCATION
try:
dataset = client.create_dataset(dataset) # API request
return True
except GoogleAPIError as err:
if err.code != 409: # http_client.CONFLICT
raise
return False
def load_data_into_bigquery(url, table_id):
create_bigquery_dataset_if_necessary(DATASET_ID)
client = bigquery.Client(project=PROJECT_ID)
dataset_ref = client.dataset(DATASET_ID)
table_ref = dataset_ref.table(table_id)
job_config = bigquery.LoadJobConfig()
job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
job_config.source_format = bigquery.SourceFormat.CSV
job_config.schema = CSV_SCHEMA
load_job = client.load_table_from_uri(
url, table_ref, job_config=job_config
)
print("Starting job {}".format(load_job.job_id))
load_job.result() # Waits for table load to complete.
print("Job finished.")
destination_table = client.get_table(table_ref)
print("Loaded {} rows.".format(destination_table.num_rows))
###Output
_____no_output_____
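###Markdown
An optional helper, not part of the original tutorial: it reports whether a table already contains rows, so the load step can be skipped on re-runs. It assumes `google.api_core.exceptions.NotFound` is raised when the table does not exist.
###Code
from google.api_core.exceptions import NotFound

def table_is_loaded(table_id):
    # True if the table exists and already holds at least one row.
    client = bigquery.Client(project=PROJECT_ID)
    table_ref = client.dataset(DATASET_ID).table(table_id)
    try:
        return client.get_table(table_ref).num_rows > 0
    except NotFound:
        return False
###Output
_____no_output_____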
###Markdown
Load Census data in BigQuery.
###Code
load_data_into_bigquery(TRAINING_URL, TRAINING_TABLE_ID)
load_data_into_bigquery(EVAL_URL, EVAL_TABLE_ID)
###Output
Starting job 2ceffef8-e6e4-44bb-9e86-3d97b0501187
Job finished.
Loaded 32561 rows.
Starting job bf66f1b3-2506-408b-9009-c19f4ae9f58a
Job finished.
Loaded 16278 rows.
###Markdown
Confirm that data was imported. TODO: replace the project placeholder in the query below with your PROJECT_ID. Note: --use_bqstorage_api fetches the data using the BigQuery Storage API and confirms that you are authorized to use it. Make sure that it is enabled for your project: https://cloud.google.com/bigquery/docs/reference/storage/enabling_the_api
###Code
%%bigquery --use_bqstorage_api
SELECT * FROM `<YOUR PROJECT>.census_dataset.census_training_table` LIMIT 5
###Output
_____no_output_____
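###Markdown
The same preview can be run without the cell magic by using the BigQuery client directly; this is an illustrative alternative, not part of the original tutorial.
###Code
# Build the same LIMIT 5 query from the constants defined earlier and fetch it as a DataFrame.
preview_query = 'SELECT * FROM `{}.{}.{}` LIMIT 5'.format(
    PROJECT_ID, DATASET_ID, TRAINING_TABLE_ID)
preview_df = bigquery.Client(project=PROJECT_ID).query(preview_query).to_dataframe()
preview_df.head()
###Output
_____no_output_____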
###Markdown
Load census data into a TensorFlow Dataset using the BigQuery reader Read and transform census data from BigQuery into a TensorFlow Dataset
###Code
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
from tensorflow_io.bigquery import BigQueryClient
from tensorflow_io.bigquery import BigQueryReadSession
def transform_row(row_dict):
# Trim all string tensors
trimmed_dict = { column:
(tf.strings.strip(tensor) if tensor.dtype == 'string' else tensor)
for (column,tensor) in row_dict.items()
}
# Extract feature column
income_bracket = trimmed_dict.pop('income_bracket')
# Convert feature column to 0.0/1.0
income_bracket_float = tf.cond(tf.equal(tf.strings.strip(income_bracket), '>50K'),
lambda: tf.constant(1.0),
lambda: tf.constant(0.0))
return (trimmed_dict, income_bracket_float)
def read_bigquery(table_name):
tensorflow_io_bigquery_client = BigQueryClient()
read_session = tensorflow_io_bigquery_client.read_session(
"projects/" + PROJECT_ID,
PROJECT_ID, table_name, DATASET_ID,
list(field.name for field in CSV_SCHEMA
if not field.name in UNUSED_COLUMNS),
list(dtypes.double if field.field_type == 'FLOAT64'
else dtypes.string for field in CSV_SCHEMA
if not field.name in UNUSED_COLUMNS),
requested_streams=2)
dataset = read_session.parallel_read_rows()
  transformed_ds = dataset.map(transform_row)
return transformed_ds
BATCH_SIZE = 32
training_ds = read_bigquery(TRAINING_TABLE_ID).shuffle(10000).batch(BATCH_SIZE)
eval_ds = read_bigquery(EVAL_TABLE_ID).batch(BATCH_SIZE)
###Output
_____no_output_____
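###Markdown
Before building the model, it can help to pull a single batch and inspect its shapes to confirm the pipeline works end to end. This optional sanity check assumes TensorFlow 2 eager execution, as used throughout this notebook.
###Code
# Take one batch from the training dataset and print feature shapes plus a few labels.
for features, labels in training_ds.take(1):
    print({name: tensor.shape for name, tensor in features.items()})
    print(labels.numpy()[:5])
###Output
_____no_output_____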
###Markdown
Define feature columns
###Code
def get_categorical_feature_values(column):
query = 'SELECT DISTINCT TRIM({}) FROM `{}`.{}.{}'.format(column, PROJECT_ID, DATASET_ID, TRAINING_TABLE_ID)
client = bigquery.Client(project=PROJECT_ID)
dataset_ref = client.dataset(DATASET_ID)
job_config = bigquery.QueryJobConfig()
query_job = client.query(query, job_config=job_config)
result = query_job.to_dataframe()
return result.values[:,0]
from tensorflow import feature_column
feature_columns = []
# numeric cols
for header in ['capital_gain', 'capital_loss', 'hours_per_week']:
feature_columns.append(feature_column.numeric_column(header))
# categorical cols
for header in ['workclass', 'marital_status', 'occupation', 'relationship',
'race', 'native_country', 'education']:
categorical_feature = feature_column.categorical_column_with_vocabulary_list(
header, get_categorical_feature_values(header))
categorical_feature_one_hot = feature_column.indicator_column(categorical_feature)
feature_columns.append(categorical_feature_one_hot)
# bucketized cols
age = feature_column.numeric_column('age')
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
###Output
_____no_output_____
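###Markdown
Optionally, you can apply the feature layer to one batch to confirm that every column transforms cleanly; the resulting width is the concatenation of the numeric, one-hot, and bucketized features. This sketch reuses `training_ds` from above and is not part of the original tutorial.
###Code
# Run the DenseFeatures layer on a single batch of raw features.
example_features, _ = next(iter(training_ds))
print(feature_layer(example_features).shape)
###Output
_____no_output_____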
###Markdown
Build and train model Build model
###Code
Dense = tf.keras.layers.Dense
model = tf.keras.Sequential(
[
feature_layer,
Dense(100, activation=tf.nn.relu, kernel_initializer='uniform'),
Dense(75, activation=tf.nn.relu),
Dense(50, activation=tf.nn.relu),
Dense(25, activation=tf.nn.relu),
Dense(1, activation=tf.nn.sigmoid)
])
# Compile Keras model
model.compile(
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
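###Markdown
Because no optimizer is passed above, Keras falls back to its default (RMSprop). If you prefer to be explicit, an equivalent compile step might look like the following; the choice of Adam here is ours, not part of the original tutorial, and running this cell replaces the compile configuration above.
###Code
# Explicit-optimizer variant of the compile step (Adam is an assumption, not the tutorial's choice).
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss='binary_crossentropy',
    metrics=['accuracy'])
###Output
_____no_output_____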
###Markdown
Train model
###Code
model.fit(training_ds, epochs=5)
###Output
WARNING:tensorflow:Layer sequential is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4276: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4331: VocabularyListCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
Epoch 1/5
1018/1018 [==============================] - 17s 17ms/step - loss: 0.5985 - accuracy: 0.8105
Epoch 2/5
1018/1018 [==============================] - 10s 10ms/step - loss: 0.3670 - accuracy: 0.8324
Epoch 3/5
1018/1018 [==============================] - 11s 10ms/step - loss: 0.3487 - accuracy: 0.8393
Epoch 4/5
1018/1018 [==============================] - 11s 10ms/step - loss: 0.3398 - accuracy: 0.8435
Epoch 5/5
1018/1018 [==============================] - 11s 11ms/step - loss: 0.3377 - accuracy: 0.8455
###Markdown
Evaluate model Evaluate model
###Code
loss, accuracy = model.evaluate(eval_ds)
print("Accuracy", accuracy)
###Output
509/509 [==============================] - 8s 15ms/step - loss: 0.3338 - accuracy: 0.8398
Accuracy 0.8398452
###Markdown
Evaluate a couple of random samples
###Code
sample_x = {
'age' : np.array([56, 36]),
'workclass': np.array(['Local-gov', 'Private']),
'education': np.array(['Bachelors', 'Bachelors']),
'marital_status': np.array(['Married-civ-spouse', 'Married-civ-spouse']),
'occupation': np.array(['Tech-support', 'Other-service']),
'relationship': np.array(['Husband', 'Husband']),
'race': np.array(['White', 'Black']),
'gender': np.array(['Male', 'Male']),
'capital_gain': np.array([0, 7298]),
'capital_loss': np.array([0, 0]),
'hours_per_week': np.array([40, 36]),
'native_country': np.array(['United-States', 'United-States'])
}
model.predict(sample_x)
###Output
_____no_output_____
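###Markdown
The model outputs a sigmoid score that can be read as the probability of earning more than $50,000; a simple 0.5 cutoff turns it into a hard label. This post-processing step is illustrative and not part of the original tutorial.
###Code
# Convert sigmoid scores into hard >50K / <=50K labels with a 0.5 threshold.
predictions = model.predict(sample_x)
for score in predictions:
    print('>50K' if score[0] > 0.5 else '<=50K', float(score[0]))
###Output
_____no_output_____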
###Markdown
Copyright 2019 The TensorFlow IO Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
End-to-end example for the BigQuery TensorFlow reader View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Overview This tutorial shows how to use the [BigQuery TensorFlow reader](https://github.com/tensorflow/io/tree/master/tensorflow_io/bigquery) to train a neural network using the Keras sequential API. Dataset This tutorial uses the [United States Census Income Dataset](https://archive.ics.uci.edu/ml/datasets/census+income) provided by the [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php). This dataset contains information about people from a 1994 Census database, including age, education, marital status, occupation, and whether they make more than $50,000 a year. Setup Set up your GCP project **The following steps are required, regardless of your notebook environment.** 1. [Select or create a GCP project.](https://console.cloud.google.com/cloud-resource-manager) 2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project) 3. [Enable the BigQuery Storage API](https://cloud.google.com/bigquery/docs/reference/storage/enabling_the_api) 4. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Install the required packages, and restart the runtime
###Code
try:
# Use the Colab's preinstalled TensorFlow 2.x
%tensorflow_version 2.x
except:
pass
!pip install fastavro
!pip install tensorflow-io==0.9.0
!pip install google-cloud-bigquery-storage
###Output
Requirement already satisfied: google-cloud-bigquery-storage in /usr/local/lib/python3.7/dist-packages (1.1.0)
Requirement already satisfied: google-api-core[grpc]<2.0.0dev,>=1.14.0 in /usr/local/lib/python3.7/dist-packages (from google-cloud-bigquery-storage) (1.26.3)
Requirement already satisfied: packaging>=14.3 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (21.3)
Requirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (57.4.0)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (2.23.0)
Requirement already satisfied: google-auth<2.0dev,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (1.35.0)
Requirement already satisfied: protobuf>=3.12.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (3.17.3)
Requirement already satisfied: pytz in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (2018.9)
Requirement already satisfied: six>=1.13.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (1.15.0)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (1.53.0)
Requirement already satisfied: grpcio<2.0dev,>=1.29.0 in /usr/local/lib/python3.7/dist-packages (from google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (1.42.0)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (4.2.4)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (0.2.8)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2.0dev,>=1.21.1->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (4.8)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=14.3->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (3.0.6)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2.0dev,>=1.21.1->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (0.4.8)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (2021.10.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0dev,>=2.18.0->google-api-core[grpc]<2.0.0dev,>=1.14.0->google-cloud-bigquery-storage) (2.10)
###Markdown
Authenticate
###Code
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
###Output
Authenticated
###Markdown
Set your PROJECT ID
###Code
PROJECT_ID = "cmp-development" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
%env GCLOUD_PROJECT=$PROJECT_ID
###Output
Updated property [core/project].
env: GCLOUD_PROJECT=cmp-development
###Markdown
Import Python libraries, define constants
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
from six.moves import urllib
import tempfile
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from google.api_core.exceptions import GoogleAPIError
LOCATION = 'us'
# Storage directory
# DATA_DIR = os.path.join(tempfile.gettempdir(), 'census_data')
# Download options.
# DATA_URL = 'https://storage.googleapis.com/cloud-samples-data/ml-engine/census/data'
# TRAINING_FILE = 'adult.data.csv'
# EVAL_FILE = 'adult.test.csv'
# TRAINING_URL = '%s/%s' % (DATA_URL, TRAINING_FILE)
# EVAL_URL = '%s/%s' % (DATA_URL, EVAL_FILE)
DATASET_ID = 'Preprocessing'
TRAINING_TABLE_ID = 'DatasetTrain'
# EVAL_TABLE_ID = 'census_eval_table'
CSV_SCHEMA = [
bigquery.SchemaField("age", "FLOAT64"),
bigquery.SchemaField("workclass", "STRING"),
bigquery.SchemaField("fnlwgt", "FLOAT64"),
bigquery.SchemaField("education", "STRING"),
bigquery.SchemaField("education_num", "FLOAT64"),
bigquery.SchemaField("marital_status", "STRING"),
bigquery.SchemaField("occupation", "STRING"),
bigquery.SchemaField("relationship", "STRING"),
bigquery.SchemaField("race", "STRING"),
bigquery.SchemaField("gender", "STRING"),
bigquery.SchemaField("capital_gain", "FLOAT64"),
bigquery.SchemaField("capital_loss", "FLOAT64"),
bigquery.SchemaField("hours_per_week", "FLOAT64"),
bigquery.SchemaField("native_country", "STRING"),
bigquery.SchemaField("income_bracket", "STRING"),
]
UNUSED_COLUMNS = ["fnlwgt", "education_num"]
###Output
_____no_output_____
###Markdown
Import census data into BigQuery Define helper methods to load data into BigQuery
###Code
def create_bigquery_dataset_if_necessary(dataset_id):
# Construct a full Dataset object to send to the API.
client = bigquery.Client(project=PROJECT_ID)
dataset = bigquery.Dataset(bigquery.dataset.DatasetReference(PROJECT_ID, dataset_id))
dataset.location = LOCATION
try:
dataset = client.create_dataset(dataset) # API request
return True
except GoogleAPIError as err:
if err.code != 409: # http_client.CONFLICT
raise
return False
def load_data_into_bigquery(url, table_id):
create_bigquery_dataset_if_necessary(DATASET_ID)
client = bigquery.Client(project=PROJECT_ID)
dataset_ref = client.dataset(DATASET_ID)
table_ref = dataset_ref.table(table_id)
job_config = bigquery.LoadJobConfig()
job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
job_config.source_format = bigquery.SourceFormat.CSV
job_config.schema = CSV_SCHEMA
load_job = client.load_table_from_uri(
url, table_ref, job_config=job_config
)
print("Starting job {}".format(load_job.job_id))
load_job.result() # Waits for table load to complete.
print("Job finished.")
destination_table = client.get_table(table_ref)
print("Loaded {} rows.".format(destination_table.num_rows))
###Output
_____no_output_____
###Markdown
Load Census data in BigQuery.
###Code
# load_data_into_bigquery(TRAINING_URL, TRAINING_TABLE_ID)
# load_data_into_bigquery(EVAL_URL, EVAL_TABLE_ID)
###Output
_____no_output_____
###Markdown
Confirm that data was imported. TODO: replace the project placeholder in the query below with your PROJECT_ID. Note: --use_bqstorage_api fetches the data using the BigQuery Storage API and confirms that you are authorized to use it. Make sure that it is enabled for your project: https://cloud.google.com/bigquery/docs/reference/storage/enabling_the_api
###Code
%%bigquery --use_bqstorage_api
SELECT * FROM `cmp-development.Preprocessing.DatasetTrain` LIMIT 5
!python --version
###Output
Python 3.7.12
###Markdown
Load census data into a TensorFlow Dataset using the BigQuery reader Read and transform census data from BigQuery into a TensorFlow Dataset
###Code
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
from tensorflow_io.bigquery import BigQueryClient
from tensorflow_io.bigquery import BigQueryReadSession
def transform_row(row_dict):
# Trim all string tensors
trimmed_dict = { column:
(tf.strings.strip(tensor) if tensor.dtype == 'string' else tensor)
for (column,tensor) in row_dict.items()
}
# Extract feature column
income_bracket = trimmed_dict.pop('income_bracket')
# Convert feature column to 0.0/1.0
income_bracket_float = tf.cond(tf.equal(tf.strings.strip(income_bracket), '>50K'),
lambda: tf.constant(1.0),
lambda: tf.constant(0.0))
return (trimmed_dict, income_bracket_float)
def read_bigquery(table_name):
tensorflow_io_bigquery_client = BigQueryClient()
read_session = tensorflow_io_bigquery_client.read_session(
"projects/" + PROJECT_ID,
PROJECT_ID, table_name, DATASET_ID,
list(field.name for field in CSV_SCHEMA
if not field.name in UNUSED_COLUMNS),
list(dtypes.double if field.field_type == 'FLOAT64'
else dtypes.string for field in CSV_SCHEMA
if not field.name in UNUSED_COLUMNS),
requested_streams=2)
dataset = read_session.parallel_read_rows()
transformed_ds = dataset.map(transform_row)
return transformed_ds
BATCH_SIZE = 32
TRAINING_TABLE_ID='DatasetTrain'
training_ds = read_bigquery(TRAINING_TABLE_ID).shuffle(10000).batch(BATCH_SIZE)
# eval_ds = read_bigquery(EVAL_TABLE_ID).batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Define feature columns
###Code
def get_categorical_feature_values(column):
query = 'SELECT DISTINCT TRIM({}) FROM `{}`.{}.{}'.format(column, PROJECT_ID, DATASET_ID, TRAINING_TABLE_ID)
client = bigquery.Client(project=PROJECT_ID)
dataset_ref = client.dataset(DATASET_ID)
job_config = bigquery.QueryJobConfig()
query_job = client.query(query, job_config=job_config)
result = query_job.to_dataframe()
return result.values[:,0]
from tensorflow import feature_column
feature_columns = []
# numeric cols
for header in ['capital_gain', 'capital_loss', 'hours_per_week']:
feature_columns.append(feature_column.numeric_column(header))
# categorical cols
for header in ['workclass', 'marital_status', 'occupation', 'relationship',
'race', 'native_country', 'education']:
categorical_feature = feature_column.categorical_column_with_vocabulary_list(
header, get_categorical_feature_values(header))
categorical_feature_one_hot = feature_column.indicator_column(categorical_feature)
feature_columns.append(categorical_feature_one_hot)
# bucketized cols
age = feature_column.numeric_column('age')
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
###Output
_____no_output_____
###Markdown
Build and train model Build model
###Code
Dense = tf.keras.layers.Dense
model = tf.keras.Sequential(
[
feature_layer,
Dense(100, activation=tf.nn.relu, kernel_initializer='uniform'),
Dense(75, activation=tf.nn.relu),
Dense(50, activation=tf.nn.relu),
Dense(25, activation=tf.nn.relu),
Dense(1, activation=tf.nn.sigmoid)
])
# Compile Keras model
model.compile(
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train model
###Code
model.fit(training_ds, epochs=5)
###Output
WARNING:tensorflow:Layer sequential is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4276: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4331: VocabularyListCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
Epoch 1/5
1018/1018 [==============================] - 14s 14ms/step - loss: 0.4736 - accuracy: 0.8137
Epoch 2/5
1018/1018 [==============================] - 8s 8ms/step - loss: 0.3564 - accuracy: 0.8336
Epoch 3/5
1018/1018 [==============================] - 8s 8ms/step - loss: 0.3488 - accuracy: 0.8386
Epoch 4/5
1018/1018 [==============================] - 8s 8ms/step - loss: 0.3559 - accuracy: 0.8392
Epoch 5/5
1018/1018 [==============================] - 9s 9ms/step - loss: 0.3400 - accuracy: 0.8442
###Markdown
Evaluate model Evaluate model
###Code
loss, accuracy = model.evaluate(eval_ds)
print("Accuracy", accuracy)
###Output
509/509 [==============================] - 6s 12ms/step - loss: 0.3555 - accuracy: 0.8249
Accuracy 0.8249171
###Markdown
Evaluate a couple of random samples
###Code
sample_x = {
'age' : np.array([56, 36]),
'workclass': np.array(['Local-gov', 'Private']),
'education': np.array(['Bachelors', 'Bachelors']),
'marital_status': np.array(['Married-civ-spouse', 'Married-civ-spouse']),
'occupation': np.array(['Tech-support', 'Other-service']),
'relationship': np.array(['Husband', 'Husband']),
'race': np.array(['White', 'Black']),
'gender': np.array(['Male', 'Male']),
'capital_gain': np.array([0, 7298]),
'capital_loss': np.array([0, 0]),
'hours_per_week': np.array([40, 36]),
'native_country': np.array(['United-States', 'United-States'])
}
model.predict(sample_x)
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow IO Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
End-to-end example for the BigQuery TensorFlow reader View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Overview This tutorial shows how to use the [BigQuery TensorFlow reader](https://github.com/tensorflow/io/tree/master/tensorflow_io/bigquery) to train a neural network using the Keras sequential API. Dataset This tutorial uses the [United States Census Income Dataset](https://archive.ics.uci.edu/ml/datasets/census+income) provided by the [UC Irvine Machine Learning Repository](https://archive.ics.uci.edu/ml/index.php). This dataset contains information about people from a 1994 Census database, including age, education, marital status, occupation, and whether they make more than $50,000 a year. Setup Set up your GCP project **The following steps are required, regardless of your notebook environment.** 1. [Select or create a GCP project.](https://console.cloud.google.com/cloud-resource-manager) 2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project) 3. [Enable the BigQuery Storage API](https://cloud.google.com/bigquery/docs/reference/storage/enabling_the_api) 4. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Install the required packages, and restart the runtime
###Code
!pip install fastavro
!pip install tensorflow-io==0.9.0
!pip install google-cloud-bigquery-storage
###Output
_____no_output_____
###Markdown
Authenticate
###Code
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
###Output
_____no_output_____
###Markdown
Set your PROJECT ID
###Code
PROJECT_ID = "<YOUR PROJECT>" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
%env GCLOUD_PROJECT=$PROJECT_ID
###Output
_____no_output_____
###Markdown
Import Python libraries, define constants
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import os
from six.moves import urllib
import tempfile
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from google.api_core.exceptions import GoogleAPIError
LOCATION = 'us'
# Storage directory
DATA_DIR = os.path.join(tempfile.gettempdir(), 'census_data')
# Download options.
DATA_URL = 'https://storage.googleapis.com/cloud-samples-data/ml-engine/census/data'
TRAINING_FILE = 'adult.data.csv'
EVAL_FILE = 'adult.test.csv'
TRAINING_URL = '%s/%s' % (DATA_URL, TRAINING_FILE)
EVAL_URL = '%s/%s' % (DATA_URL, EVAL_FILE)
DATASET_ID = 'census_dataset'
TRAINING_TABLE_ID = 'census_training_table'
EVAL_TABLE_ID = 'census_eval_table'
CSV_SCHEMA = [
bigquery.SchemaField("age", "FLOAT64"),
bigquery.SchemaField("workclass", "STRING"),
bigquery.SchemaField("fnlwgt", "FLOAT64"),
bigquery.SchemaField("education", "STRING"),
bigquery.SchemaField("education_num", "FLOAT64"),
bigquery.SchemaField("marital_status", "STRING"),
bigquery.SchemaField("occupation", "STRING"),
bigquery.SchemaField("relationship", "STRING"),
bigquery.SchemaField("race", "STRING"),
bigquery.SchemaField("gender", "STRING"),
bigquery.SchemaField("capital_gain", "FLOAT64"),
bigquery.SchemaField("capital_loss", "FLOAT64"),
bigquery.SchemaField("hours_per_week", "FLOAT64"),
bigquery.SchemaField("native_country", "STRING"),
bigquery.SchemaField("income_bracket", "STRING"),
]
UNUSED_COLUMNS = ["fnlwgt", "education_num"]
###Output
_____no_output_____
###Markdown
Import census data into BigQuery Define helper methods to load data into BigQuery
###Code
def create_bigquery_dataset_if_necessary(dataset_id):
# Construct a full Dataset object to send to the API.
client = bigquery.Client(project=PROJECT_ID)
dataset = bigquery.Dataset(bigquery.dataset.DatasetReference(PROJECT_ID, dataset_id))
dataset.location = LOCATION
try:
dataset = client.create_dataset(dataset) # API request
return True
except GoogleAPIError as err:
if err.code != 409: # http_client.CONFLICT
raise
return False
def load_data_into_bigquery(url, table_id):
create_bigquery_dataset_if_necessary(DATASET_ID)
client = bigquery.Client(project=PROJECT_ID)
dataset_ref = client.dataset(DATASET_ID)
table_ref = dataset_ref.table(table_id)
job_config = bigquery.LoadJobConfig()
job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
job_config.source_format = bigquery.SourceFormat.CSV
job_config.schema = CSV_SCHEMA
load_job = client.load_table_from_uri(
url, table_ref, job_config=job_config
)
print("Starting job {}".format(load_job.job_id))
load_job.result() # Waits for table load to complete.
print("Job finished.")
destination_table = client.get_table(table_ref)
print("Loaded {} rows.".format(destination_table.num_rows))
###Output
_____no_output_____
###Markdown
Load Census data in BigQuery.
###Code
load_data_into_bigquery(TRAINING_URL, TRAINING_TABLE_ID)
load_data_into_bigquery(EVAL_URL, EVAL_TABLE_ID)
###Output
Starting job 2ceffef8-e6e4-44bb-9e86-3d97b0501187
Job finished.
Loaded 32561 rows.
Starting job bf66f1b3-2506-408b-9009-c19f4ae9f58a
Job finished.
Loaded 16278 rows.
###Markdown
Confirm that data was imported. TODO: replace the project placeholder in the query below with your PROJECT_ID. Note: --use_bqstorage_api fetches the data using the BigQuery Storage API and confirms that you are authorized to use it. Make sure that it is enabled for your project: https://cloud.google.com/bigquery/docs/reference/storage/enabling_the_api
###Code
%%bigquery --use_bqstorage_api
SELECT * FROM `<YOUR PROJECT>.census_dataset.census_training_table` LIMIT 5
###Output
_____no_output_____
###Markdown
Load census data into a TensorFlow Dataset using the BigQuery reader Read and transform census data from BigQuery into a TensorFlow Dataset
###Code
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
from tensorflow_io.bigquery import BigQueryClient
from tensorflow_io.bigquery import BigQueryReadSession
def transform_row(row_dict):
# Trim all string tensors
trimmed_dict = { column:
(tf.strings.strip(tensor) if tensor.dtype == 'string' else tensor)
for (column,tensor) in row_dict.items()
}
# Extract feature column
income_bracket = trimmed_dict.pop('income_bracket')
# Convert feature column to 0.0/1.0
income_bracket_float = tf.cond(tf.equal(tf.strings.strip(income_bracket), '>50K'),
lambda: tf.constant(1.0),
lambda: tf.constant(0.0))
return (trimmed_dict, income_bracket_float)
def read_bigquery(table_name):
tensorflow_io_bigquery_client = BigQueryClient()
read_session = tensorflow_io_bigquery_client.read_session(
"projects/" + PROJECT_ID,
PROJECT_ID, table_name, DATASET_ID,
list(field.name for field in CSV_SCHEMA
if not field.name in UNUSED_COLUMNS),
list(dtypes.double if field.field_type == 'FLOAT64'
else dtypes.string for field in CSV_SCHEMA
if not field.name in UNUSED_COLUMNS),
requested_streams=2)
dataset = read_session.parallel_read_rows()
  transformed_ds = dataset.map(transform_row)
return transformed_ds
BATCH_SIZE = 32
training_ds = read_bigquery(TRAINING_TABLE_ID).shuffle(10000).batch(BATCH_SIZE)
eval_ds = read_bigquery(EVAL_TABLE_ID).batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Define feature columns
###Code
def get_categorical_feature_values(column):
query = 'SELECT DISTINCT TRIM({}) FROM `{}`.{}.{}'.format(column, PROJECT_ID, DATASET_ID, TRAINING_TABLE_ID)
client = bigquery.Client(project=PROJECT_ID)
dataset_ref = client.dataset(DATASET_ID)
job_config = bigquery.QueryJobConfig()
query_job = client.query(query, job_config=job_config)
result = query_job.to_dataframe()
return result.values[:,0]
from tensorflow import feature_column
feature_columns = []
# numeric cols
for header in ['capital_gain', 'capital_loss', 'hours_per_week']:
feature_columns.append(feature_column.numeric_column(header))
# categorical cols
for header in ['workclass', 'marital_status', 'occupation', 'relationship',
'race', 'native_country', 'education']:
categorical_feature = feature_column.categorical_column_with_vocabulary_list(
header, get_categorical_feature_values(header))
categorical_feature_one_hot = feature_column.indicator_column(categorical_feature)
feature_columns.append(categorical_feature_one_hot)
# bucketized cols
age = feature_column.numeric_column('age')
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
###Output
_____no_output_____
###Markdown
Build and train model Build model
###Code
Dense = tf.keras.layers.Dense
model = tf.keras.Sequential(
[
feature_layer,
Dense(100, activation=tf.nn.relu, kernel_initializer='uniform'),
Dense(75, activation=tf.nn.relu),
Dense(50, activation=tf.nn.relu),
Dense(25, activation=tf.nn.relu),
Dense(1, activation=tf.nn.sigmoid)
])
# Compile Keras model
model.compile(
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train model
###Code
model.fit(training_ds, epochs=5)
###Output
WARNING:tensorflow:Layer sequential is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4276: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4331: VocabularyListCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
Epoch 1/5
1018/1018 [==============================] - 17s 17ms/step - loss: 0.5985 - accuracy: 0.8105
Epoch 2/5
1018/1018 [==============================] - 10s 10ms/step - loss: 0.3670 - accuracy: 0.8324
Epoch 3/5
1018/1018 [==============================] - 11s 10ms/step - loss: 0.3487 - accuracy: 0.8393
Epoch 4/5
1018/1018 [==============================] - 11s 10ms/step - loss: 0.3398 - accuracy: 0.8435
Epoch 5/5
1018/1018 [==============================] - 11s 11ms/step - loss: 0.3377 - accuracy: 0.8455
###Markdown
Evaluate model Evaluate model
###Code
loss, accuracy = model.evaluate(eval_ds)
print("Accuracy", accuracy)
###Output
509/509 [==============================] - 8s 15ms/step - loss: 0.3338 - accuracy: 0.8398
Accuracy 0.8398452
###Markdown
Evaluate a couple of random samples
###Code
sample_x = {
'age' : np.array([56, 36]),
'workclass': np.array(['Local-gov', 'Private']),
'education': np.array(['Bachelors', 'Bachelors']),
'marital_status': np.array(['Married-civ-spouse', 'Married-civ-spouse']),
'occupation': np.array(['Tech-support', 'Other-service']),
'relationship': np.array(['Husband', 'Husband']),
'race': np.array(['White', 'Black']),
'gender': np.array(['Male', 'Male']),
'capital_gain': np.array([0, 7298]),
'capital_loss': np.array([0, 0]),
'hours_per_week': np.array([40, 36]),
'native_country': np.array(['United-States', 'United-States'])
}
model.predict(sample_x)
###Output
_____no_output_____ |
stacks_queues/stack_min/stack_min_solution.ipynb | ###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Implement a stack with push, pop, and min methods running O(1) time. * [Constraints](Constraints) * [Test Cases](Test-Cases) * [Algorithm](Algorithm) * [Code](Code) * [Unit Test](Unit-Test) Constraints * Can we assume this is a stack of ints? * Yes * Can we assume the input values for push are valid? * Yes * If we call this function on an empty stack, can we return sys.maxsize? * Yes * Can we assume we already have a stack class that can be used for this problem? * Yes * Can we assume this fits memory? * Yes Test Cases * Push/pop on empty stack * Push/pop on non-empty stack * Min on empty stack * Min on non-empty stack Algorithm We'll use a second stack to keep track of the minimum values. Min * If the second stack is empty, return an error code (max int value) * Else, return the top of the stack, without popping it Complexity: * Time: O(1) * Space: O(1) Push * Push the data * If the data is less than min * Push data to second stack Complexity: * Time: O(1) * Space: O(1) Pop * Pop the data * If the data is equal to min * Pop the top of the second stack * Return the data Complexity: * Time: O(1) * Space: O(1) Code
###Code
%run ../stack/stack.py
import sys
class StackMin(Stack):
def __init__(self, top=None):
super(StackMin, self).__init__(top)
self.stack_of_mins = Stack()
def minimum(self):
if self.stack_of_mins.top is None:
return sys.maxsize
else:
return self.stack_of_mins.peek()
def push(self, data):
super(StackMin, self).push(data)
if data < self.minimum():
self.stack_of_mins.push(data)
def pop(self):
data = super(StackMin, self).pop()
if data == self.minimum():
self.stack_of_mins.pop()
return data
###Output
_____no_output_____
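###Markdown
A quick usage sketch of the class above (it assumes the `Stack` class loaded by `%run ../stack/stack.py` provides `push`, `pop`, and `peek`):
###Code
# Push two values, confirm the running minimum, then pop back down to an empty stack.
stack = StackMin()
stack.push(3)
stack.push(1)
print(stack.minimum())  # 1
stack.pop()
print(stack.minimum())  # 3
stack.pop()
print(stack.minimum() == sys.maxsize)  # True: an empty stack returns the error code
###Output
_____no_output_____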
###Markdown
Unit Test
###Code
%%writefile test_stack_min.py
from nose.tools import assert_equal
class TestStackMin(object):
def test_stack_min(self):
print('Test: Push on empty stack, non-empty stack')
stack = StackMin()
stack.push(5)
assert_equal(stack.peek(), 5)
assert_equal(stack.minimum(), 5)
stack.push(1)
assert_equal(stack.peek(), 1)
assert_equal(stack.minimum(), 1)
stack.push(3)
assert_equal(stack.peek(), 3)
assert_equal(stack.minimum(), 1)
stack.push(0)
assert_equal(stack.peek(), 0)
assert_equal(stack.minimum(), 0)
print('Test: Pop on non-empty stack')
assert_equal(stack.pop(), 0)
assert_equal(stack.minimum(), 1)
assert_equal(stack.pop(), 3)
assert_equal(stack.minimum(), 1)
assert_equal(stack.pop(), 1)
assert_equal(stack.minimum(), 5)
assert_equal(stack.pop(), 5)
assert_equal(stack.minimum(), sys.maxsize)
print('Test: Pop empty stack')
assert_equal(stack.pop(), None)
print('Success: test_stack_min')
def main():
test = TestStackMin()
test.test_stack_min()
if __name__ == '__main__':
main()
run -i test_stack_min.py
###Output
Test: Push on empty stack, non-empty stack
Test: Pop on non-empty stack
Test: Pop empty stack
Success: test_stack_min
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Implement a stack with push, pop, and min methods running O(1) time.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Can we assume this is a stack of ints? * Yes* Can we assume the input values for push are valid? * Yes* If we call this function on an empty stack, can we return sys.maxsize? * Yes* Can we assume we already have a stack class that can be used for this problem? * Yes* Can we assume this fits memory? * Yes Test Cases* Push/pop on empty stack* Push/pop on non-empty stack* Min on empty stack* Min on non-empty stack AlgorithmWe'll use a second stack to keep track of the minimum values. Min* If the second stack is empty, return an error code (max int value)* Else, return the top of the stack, without popping itComplexity:* Time: O(1)* Space: O(1) Push* Push the data* If the data is less than min * Push data to second stackComplexity:* Time: O(1)* Space: O(1) Pop* Pop the data* If the data is equal to min * Pop the top of the second stack* Return the dataComplexity:* Time: O(1)* Space: O(1) Code
###Code
%run ../stack/stack.py
import sys
class StackMin(Stack):
def __init__(self, top=None):
super(StackMin, self).__init__(top)
self.stack_of_mins = Stack()
def minimum(self):
if self.stack_of_mins.top is None:
return sys.maxsize
else:
return self.stack_of_mins.peek()
def push(self, data):
super(StackMin, self).push(data)
if data < self.minimum():
self.stack_of_mins.push(data)
def pop(self):
data = super(StackMin, self).pop()
if data == self.minimum():
self.stack_of_mins.pop()
return data
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_stack_min.py
import unittest
class TestStackMin(unittest.TestCase):
def test_stack_min(self):
print('Test: Push on empty stack, non-empty stack')
stack = StackMin()
stack.push(5)
self.assertEqual(stack.peek(), 5)
self.assertEqual(stack.minimum(), 5)
stack.push(1)
self.assertEqual(stack.peek(), 1)
self.assertEqual(stack.minimum(), 1)
stack.push(3)
self.assertEqual(stack.peek(), 3)
self.assertEqual(stack.minimum(), 1)
stack.push(0)
self.assertEqual(stack.peek(), 0)
self.assertEqual(stack.minimum(), 0)
print('Test: Pop on non-empty stack')
self.assertEqual(stack.pop(), 0)
self.assertEqual(stack.minimum(), 1)
self.assertEqual(stack.pop(), 3)
self.assertEqual(stack.minimum(), 1)
self.assertEqual(stack.pop(), 1)
self.assertEqual(stack.minimum(), 5)
self.assertEqual(stack.pop(), 5)
self.assertEqual(stack.minimum(), sys.maxsize)
print('Test: Pop empty stack')
self.assertEqual(stack.pop(), None)
print('Success: test_stack_min')
def main():
test = TestStackMin()
test.test_stack_min()
if __name__ == '__main__':
main()
run -i test_stack_min.py
###Output
Test: Push on empty stack, non-empty stack
Test: Pop on non-empty stack
Test: Pop empty stack
Success: test_stack_min
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Implement a stack with push, pop, and min methods running O(1) time.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Can we assume this is a stack of ints? * Yes* If we call this function on an empty stack, can we return sys.maxsize? * Yes* Can we assume we already have a stack class that can be used for this problem? * Yes Test Cases* Push/pop on empty stack* Push/pop on non-empty stack AlgorithmWe'll use a second stack to keep track of the minimum values. Min* If the second stack is empty, return an error code (max int value)* Else, return the top of the stack, without popping itComplexity:* Time: O(1)* Space: O(1) Push* Push the data* If the data is less than min * Push data to second stackComplexity:* Time: O(1)* Space: O(n) Pop* Pop the data* If the data is equal to min * Pop the top of the second stack* Return the dataComplexity:* Time: O(1)* Space: O(1) Code
###Code
%run ../stack/stack.py
import sys
class MyStack(Stack):
def __init__(self, top=None):
self.min_vals = Stack()
super(MyStack, self).__init__(top)
def min(self):
if self.min_vals.top is None:
return sys.maxsize
else:
return self.min_vals.peek()
def push(self, data):
super(MyStack, self).push(data)
if data < self.min():
self.min_vals.push(data)
def pop(self):
data = super(MyStack, self).pop()
if data == self.min():
self.min_vals.pop()
return data
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_stack_min.py
from nose.tools import assert_equal
class TestStackMin(object):
def test_stack_min(self):
print('Test: Push on empty stack, non-empty stack')
stack = MyStack()
stack.push(5)
assert_equal(stack.peek(), 5)
assert_equal(stack.min(), 5)
stack.push(1)
assert_equal(stack.peek(), 1)
assert_equal(stack.min(), 1)
stack.push(3)
assert_equal(stack.peek(), 3)
assert_equal(stack.min(), 1)
stack.push(0)
assert_equal(stack.peek(), 0)
assert_equal(stack.min(), 0)
print('Test: Pop on non-empty stack')
assert_equal(stack.pop(), 0)
assert_equal(stack.min(), 1)
assert_equal(stack.pop(), 3)
assert_equal(stack.min(), 1)
assert_equal(stack.pop(), 1)
assert_equal(stack.min(), 5)
assert_equal(stack.pop(), 5)
assert_equal(stack.min(), sys.maxsize)
print('Test: Pop empty stack')
assert_equal(stack.pop(), None)
print('Success: test_stack_min')
def main():
test = TestStackMin()
test.test_stack_min()
if __name__ == '__main__':
main()
run -i test_stack_min.py
###Output
Test: Push on empty stack, non-empty stack
Test: Pop on non-empty stack
Test: Pop empty stack
Success: test_stack_min
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Implement a stack with push, pop, and min methods running O(1) time.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Can we assume this is a stack of ints? * Yes* Can we assume the input values for push are valid? * Yes* If we call this function on an empty stack, can we return sys.maxsize? * Yes* Can we assume we already have a stack class that can be used for this problem? * Yes* Can we assume this fits memory? * Yes Test Cases* Push/pop on empty stack* Push/pop on non-empty stack* Min on empty stack* Min on non-empty stack AlgorithmWe'll use a second stack to keep track of the minimum values. Min* If the second stack is empty, return an error code (max int value)* Else, return the top of the stack, without popping itComplexity:* Time: O(1)* Space: O(1) Push* Push the data* If the data is less than min * Push data to second stackComplexity:* Time: O(1)* Space: O(1) Pop* Pop the data* If the data is equal to min * Pop the top of the second stack* Return the dataComplexity:* Time: O(1)* Space: O(1) Code
###Code
%run ../stack/stack.py
import sys
class StackMin(Stack):
def __init__(self, top=None):
super(StackMin, self).__init__(top)
self.stack_of_mins = Stack()
def minimum(self):
if self.stack_of_mins.top is None:
return sys.maxsize
else:
return self.stack_of_mins.peek()
def push(self, data):
super(StackMin, self).push(data)
if data < self.minimum():
self.stack_of_mins.push(data)
def pop(self):
data = super(StackMin, self).pop()
if data == self.minimum():
self.stack_of_mins.pop()
return data
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_stack_min.py
from nose.tools import assert_equal
class TestStackMin(object):
def test_stack_min(self):
print('Test: Push on empty stack, non-empty stack')
stack = StackMin()
stack.push(5)
assert_equal(stack.peek(), 5)
assert_equal(stack.minimum(), 5)
stack.push(1)
assert_equal(stack.peek(), 1)
assert_equal(stack.minimum(), 1)
stack.push(3)
assert_equal(stack.peek(), 3)
assert_equal(stack.minimum(), 1)
stack.push(0)
assert_equal(stack.peek(), 0)
assert_equal(stack.minimum(), 0)
print('Test: Pop on non-empty stack')
assert_equal(stack.pop(), 0)
assert_equal(stack.minimum(), 1)
assert_equal(stack.pop(), 3)
assert_equal(stack.minimum(), 1)
assert_equal(stack.pop(), 1)
assert_equal(stack.minimum(), 5)
assert_equal(stack.pop(), 5)
assert_equal(stack.minimum(), sys.maxsize)
print('Test: Pop empty stack')
assert_equal(stack.pop(), None)
print('Success: test_stack_min')
def main():
test = TestStackMin()
test.test_stack_min()
if __name__ == '__main__':
main()
run -i test_stack_min.py
###Output
Test: Push on empty stack, non-empty stack
Test: Pop on non-empty stack
Test: Pop empty stack
Success: test_stack_min
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Implement a stack with push, pop, and min methods running O(1) time.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Can we assume this is a stack of ints? * Yes* Can we assume the input values for push are valid? * Yes* If we call this function on an empty stack, can we return sys.maxsize? * Yes* Can we assume we already have a stack class that can be used for this problem? * Yes* Can we assume this fits memory? * Yes Test Cases* Push/pop on empty stack* Push/pop on non-empty stack* Min on empty stack* Min on non-empty stack AlgorithmWe'll use a second stack to keep track of the minimum values. Min* If the second stack is empty, return an error code (max int value)* Else, return the top of the stack, without popping itComplexity:* Time: O(1)* Space: O(1) Push* Push the data* If the data is less than min * Push data to second stackComplexity:* Time: O(1)* Space: O(1) Pop* Pop the data* If the data is equal to min * Pop the top of the second stack* Return the dataComplexity:* Time: O(1)* Space: O(1) Code
###Code
%run ../stack/stack.py
import sys
class StackMin(Stack):
def __init__(self, top=None):
super(StackMin, self).__init__(top)
self.stack_of_mins = Stack()
def minimum(self):
if self.stack_of_mins.top is None:
return sys.maxsize
else:
return self.stack_of_mins.peek()
def push(self, data):
super(StackMin, self).push(data)
if data < self.minimum():
self.stack_of_mins.push(data)
def pop(self):
data = super(StackMin, self).pop()
if data == self.minimum():
self.stack_of_mins.pop()
return data
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_stack_min.py
import unittest
class TestStackMin(unittest.TestCase):
def test_stack_min(self):
print('Test: Push on empty stack, non-empty stack')
stack = StackMin()
stack.push(5)
self.assertEqual(stack.peek(), 5)
self.assertEqual(stack.minimum(), 5)
stack.push(1)
self.assertEqual(stack.peek(), 1)
self.assertEqual(stack.minimum(), 1)
stack.push(3)
self.assertEqual(stack.peek(), 3)
self.assertEqual(stack.minimum(), 1)
stack.push(0)
self.assertEqual(stack.peek(), 0)
self.assertEqual(stack.minimum(), 0)
print('Test: Pop on non-empty stack')
self.assertEqual(stack.pop(), 0)
self.assertEqual(stack.minimum(), 1)
self.assertEqual(stack.pop(), 3)
self.assertEqual(stack.minimum(), 1)
self.assertEqual(stack.pop(), 1)
self.assertEqual(stack.minimum(), 5)
self.assertEqual(stack.pop(), 5)
self.assertEqual(stack.minimum(), sys.maxsize)
print('Test: Pop empty stack')
self.assertEqual(stack.pop(), None)
print('Success: test_stack_min')
def main():
test = TestStackMin()
test.test_stack_min()
if __name__ == '__main__':
main()
run -i test_stack_min.py
###Output
Test: Push on empty stack, non-empty stack
Test: Pop on non-empty stack
Test: Pop empty stack
Success: test_stack_min
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Implement a stack with push, pop, and min methods running O(1) time.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* Can we assume this is a stack of ints? * Yes* Can we assume the input values for push are valid? * Yes* If we call this function on an empty stack, can we return sys.maxsize? * Yes* Can we assume we already have a stack class that can be used for this problem? * Yes* Can we assume this fits memory? * Yes Test Cases* Push/pop on empty stack* Push/pop on non-empty stack AlgorithmWe'll use a second stack to keep track of the minimum values. Min* If the second stack is empty, return an error code (max int value)* Else, return the top of the stack, without popping itComplexity:* Time: O(1)* Space: O(1) Push* Push the data* If the data is less than min * Push data to second stackComplexity:* Time: O(1)* Space: O(1) Pop* Pop the data* If the data is equal to min * Pop the top of the second stack* Return the dataComplexity:* Time: O(1)* Space: O(1) Code
###Code
%run ../stack/stack.py
import sys
class StackMin(Stack):
def __init__(self, top=None):
super(StackMin, self).__init__(top)
self.min_vals = Stack()
def min(self):
if self.min_vals.top is None:
return sys.maxsize
else:
return self.min_vals.peek()
def push(self, data):
super(StackMin, self).push(data)
if data < self.min():
self.min_vals.push(data)
def pop(self):
data = super(StackMin, self).pop()
if data == self.min():
self.min_vals.pop()
return data
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_stack_min.py
from nose.tools import assert_equal
class TestStackMin(object):
def test_stack_min(self):
print('Test: Push on empty stack, non-empty stack')
stack = StackMin()
stack.push(5)
assert_equal(stack.peek(), 5)
assert_equal(stack.min(), 5)
stack.push(1)
assert_equal(stack.peek(), 1)
assert_equal(stack.min(), 1)
stack.push(3)
assert_equal(stack.peek(), 3)
assert_equal(stack.min(), 1)
stack.push(0)
assert_equal(stack.peek(), 0)
assert_equal(stack.min(), 0)
print('Test: Pop on non-empty stack')
assert_equal(stack.pop(), 0)
assert_equal(stack.min(), 1)
assert_equal(stack.pop(), 3)
assert_equal(stack.min(), 1)
assert_equal(stack.pop(), 1)
assert_equal(stack.min(), 5)
assert_equal(stack.pop(), 5)
assert_equal(stack.min(), sys.maxsize)
print('Test: Pop empty stack')
assert_equal(stack.pop(), None)
print('Success: test_stack_min')
def main():
test = TestStackMin()
test.test_stack_min()
if __name__ == '__main__':
main()
run -i test_stack_min.py
###Output
Test: Push on empty stack, non-empty stack
Test: Pop on non-empty stack
Test: Pop empty stack
Success: test_stack_min
|
codes/labs_lecture04/lab01_cross_entropy/cross_entropy_demo.ipynb | ###Markdown
Lab 01 : Cross-entropy loss -- demo
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
# find automatically the path of the folder containing "file_name" :
file_name = 'cross_entropy_demo.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
# if previous search failed or too long, comment the previous line and simply write down manually the path below :
#path_to_file = '/content/gdrive/My Drive/AI6103_2020_codes/codes/labs_lecture04/lab01_cross_entropy'
print(path_to_file)
# change current path to the folder containing "file_name"
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import utils
###Output
_____no_output_____
###Markdown
Make a Cross Entropy Criterion and call it mycrit
###Code
mycrit=nn.CrossEntropyLoss()
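# Note: nn.CrossEntropyLoss combines LogSoftmax and NLLLoss, so it expects raw,
# unnormalized scores (logits) together with integer class labels.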
print(mycrit)
###Output
CrossEntropyLoss()
###Markdown
Make a batch of labels
###Code
labels=torch.LongTensor([2,3])
print(labels)
###Output
tensor([2, 3])
###Markdown
Make a batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 5, -0.5], [1.4, -1.7 , -1.3, 5.0] ])
print(scores)
utils.display_scores(scores)
###Output
_____no_output_____
###Markdown
compute the average loss on this batch
###Code
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
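# Hand check (illustrative): with the default mean reduction, CrossEntropyLoss is
# the batch average of -log softmax(scores)[i, labels[i]]; this reproduces the
# printed value (~0.0235) up to floating-point error.
manual_loss = -torch.log_softmax(scores, dim=1)[torch.arange(2), labels].mean()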
###Output
loss = 0.023508310317993164
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 3.1, -0.5], [1.4, -1.7 , -1.3, 2.0] ])
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[-1.2000, 0.5000, 3.1000, -0.5000],
[ 1.4000, -1.7000, -1.3000, 2.0000]])
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([[0.8, 2.3, -1.0, -1.2] , [-1.2, 1.3, 5.0 , -2.0 ] ] )
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[ 0.8000, 2.3000, -1.0000, -1.2000],
[-1.2000, 1.3000, 5.0000, -2.0000]])
###Markdown
Lab 01 : Cross-entropy loss -- demo
###Code
import torch
import torch.nn as nn
import utils
###Output
_____no_output_____
###Markdown
Make a Cross Entropy Criterion and call it mycrit
###Code
mycrit=nn.CrossEntropyLoss()
print(mycrit)
###Output
CrossEntropyLoss()
###Markdown
Make a batch of labels
###Code
labels=torch.LongTensor([2,3])
print(labels)
###Output
tensor([2, 3])
###Markdown
Make a batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 5, -0.5], [1.4, -1.7 , -1.3, 5.0] ])
print(scores)
utils.display_scores(scores)
###Output
_____no_output_____
###Markdown
compute the average loss on this batch
###Code
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
loss = 0.023508310317993164
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 3.1, -0.5], [1.4, -1.7 , -1.3, 2.0] ])
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[-1.2000, 0.5000, 3.1000, -0.5000],
[ 1.4000, -1.7000, -1.3000, 2.0000]])
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([[0.8, 2.3, -1.0, -1.2] , [-1.2, 1.3, 5.0 , -2.0 ] ] )
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[ 0.8000, 2.3000, -1.0000, -1.2000],
[-1.2000, 1.3000, 5.0000, -2.0000]])
###Markdown
Lab 01 : Cross-entropy loss -- demo https://blog.csdn.net/lz_peter/article/details/84574716 https://blog.csdn.net/weixin_41122036/article/details/103270152 https://zhuanlan.zhihu.com/p/137791367
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
path_to_file = '/content/gdrive/My Drive/CS4243_codes/codes/labs_lecture04/lab01_cross_entropy'
print(path_to_file)
# move to Google Drive directory
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import utils
###Output
_____no_output_____
###Markdown
Make a Cross Entropy Criterion and call it mycrit
###Code
mycrit=nn.CrossEntropyLoss()
print(mycrit)
###Output
CrossEntropyLoss()
###Markdown
Make a batch of labels
###Code
labels=torch.LongTensor([2,3])
print(labels)
###Output
tensor([2, 3])
###Markdown
Make a batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 5, -0.5], [1.4, -1.7 , -1.3, 5.0] ])
print(scores)
utils.display_scores(scores)
###Output
_____no_output_____
###Markdown
compute the average loss on this batch
###Code
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
loss = 0.023508397862315178
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 3.1, -0.5], [1.4, -1.7 , -1.3, 2.0] ])
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[-1.2000, 0.5000, 3.1000, -0.5000],
[ 1.4000, -1.7000, -1.3000, 2.0000]])
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([[0.8, 2.3, -1.0, -1.2] , [-1.2, 1.3, 5.0 , -2.0 ] ] )
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[ 0.8000, 2.3000, -1.0000, -1.2000],
[-1.2000, 1.3000, 5.0000, -2.0000]])
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([[-5, -5, 5, -5] , [-5, -5, -5, 5]])
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item())
###Output
tensor([[-5., -5., 5., -5.],
[-5., -5., -5., 5.]])
###Markdown
Compute the loss yourself from the formula
###Code
import math
# Two input samples (batch size = 2) and two target classes, out of 4 classes in total
# Step 1
first = []
# Two input samples and two target classes, so range = 2
for i in range(2):
# labels[0] = 2
# scores[0][2] = 5
# labels[1] = 3
# scores[1][3] = 5
# Get the score of the target class
val = scores[i][labels[i]]
first.append(val)
print(first)
# Step 2
second = []
sum = 0
# Two input samples and two target classes, so range = 2
for i in range(2):
# 4 classes in total, so range = 4
for j in range(4):
# Softmax step 1: exponentiate
# Exponentiate the scores of the 4 classes for each input sample and sum them; the sum is needed for softmax step 2 below: normalization
sum += math.exp(scores[i][j])
second.append(sum)
sum = 0
print(second)
L = []
# Softmax step 2: normalization
# Compute the loss for each of the two input samples
for i in range(2):
# This is the simplified formula, obtained by taking the log of the normalized probability
l = -first[i] + math.log(second[i])
L.append(l)
print(L)
Loss = 0
# Two input samples and two target classes, so range = 2
for i in range(2):
Loss += L[i]
# Compute the average loss
Loss = Loss / 2 # need to divide by 2
print('manually computed loss', Loss)
###Output
[tensor(5.), tensor(5.)]
[148.43337294357386, 148.43337294357386]
[tensor(0.0001), tensor(0.0001)]
manually computed loss tensor(0.0001)
###Markdown
Using LogSoftmax
###Code
# Two input samples (batch size = 2) and two target classes, out of 4 classes in total
# Step 1
first = []
# Two input samples and two target classes, so range = 2
for i in range(2):
# labels[0] = 2
# scores[0][2] = 5
# labels[1] = 3
# scores[1][3] = 5
# Get the score of the target class
val = scores[i][labels[i]]
first.append(val)
print(first)
# Compute LogSoftmax directly
LogSoftmax = nn.LogSoftmax(dim=1)
log_probs = LogSoftmax(scores)
print("log_probs:\n", log_probs)
L = []
# Compute the loss for each of the two input samples
for i in range(2):
l = -log_probs[i][labels[i]]
L.append(l)
print(L)
Loss = 0
# Two input samples and two target classes, so range = 2
for i in range(2):
Loss += L[i]
# Compute the average loss
Loss = Loss / 2 # need to divide by 2
print('manually computed loss', Loss)
###Output
[tensor(5.), tensor(5.)]
log_probs:
tensor([[-1.0000e+01, -1.0000e+01, -1.3625e-04, -1.0000e+01],
[-1.0000e+01, -1.0000e+01, -1.0000e+01, -1.3625e-04]])
[tensor(0.0001), tensor(0.0001)]
manually computed loss tensor(0.0001)
###Markdown
Using Softmax
###Code
# Two input samples (batch size = 2) and two target classes, out of 4 classes in total
# Step 1
first = []
# Two input samples and two target classes, so range = 2
for i in range(2):
# labels[0] = 2
# scores[0][2] = 5
# labels[1] = 3
# scores[1][3] = 5
# Get the score of the target class
val = scores[i][labels[i]]
first.append(val)
print(first)
# Compute Softmax directly
Softmax = nn.Softmax(dim=1)
probs = Softmax(scores)
print("probs:\n", probs)
L = []
# Compute the loss for each of the two input samples
for i in range(2):
# Take the log first
l = -math.log(probs[i][labels[i]])
L.append(l)
print(L)
Loss = 0
# Two input samples and two target classes, so range = 2
for i in range(2):
Loss += L[i]
# Compute the average loss
Loss = Loss / 2 # need to divide by 2
print('manually computed loss', Loss)
###Output
[tensor(5.), tensor(5.)]
probs:
tensor([[4.5394e-05, 4.5394e-05, 9.9986e-01, 4.5394e-05],
[4.5394e-05, 4.5394e-05, 4.5394e-05, 9.9986e-01]])
[0.00013626550167832833, 0.00013626550167832833]
manually computed loss 0.00013626550167832833
###Markdown
Lab 01 : Cross-entropy loss -- demo
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
# find automatically the path of the folder containing "file_name" :
file_name = 'cross_entropy_demo.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
# if previous search failed or too long, comment the previous line and simply write down manually the path below :
#path_to_file = '/content/gdrive/My Drive/CE7454_2020_codes/codes/labs_lecture04/lab01_cross_entropy'
print(path_to_file)
# change current path to the folder containing "file_name"
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import utils
###Output
_____no_output_____
###Markdown
Make a Cross Entropy Criterion and call it mycrit
###Code
mycrit=nn.CrossEntropyLoss()
print(mycrit)
###Output
CrossEntropyLoss()
###Markdown
Make a batch of labels
###Code
labels=torch.LongTensor([2,3])
print(labels)
###Output
tensor([2, 3])
###Markdown
Make a batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 5, -0.5], [1.4, -1.7 , -1.3, 5.0] ])
print(scores)
utils.display_scores(scores)
###Output
_____no_output_____
###Markdown
compute the average loss on this batch
###Code
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
loss = 0.023508310317993164
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 3.1, -0.5], [1.4, -1.7 , -1.3, 2.0] ])
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[-1.2000, 0.5000, 3.1000, -0.5000],
[ 1.4000, -1.7000, -1.3000, 2.0000]])
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([[0.8, 2.3, -1.0, -1.2] , [-1.2, 1.3, 5.0 , -2.0 ] ] )
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[ 0.8000, 2.3000, -1.0000, -1.2000],
[-1.2000, 1.3000, 5.0000, -2.0000]])
###Markdown
Lab 01 : Cross-entropy loss -- demo
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
path_to_file = '/content/gdrive/My Drive/CS4243_codes/codes/labs_lecture04/lab01_cross_entropy'
print(path_to_file)
# move to Google Drive directory
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import utils
###Output
_____no_output_____
###Markdown
Make a Cross Entropy Criterion and call it mycrit
###Code
mycrit=nn.CrossEntropyLoss()
print(mycrit)
###Output
CrossEntropyLoss()
###Markdown
Make a batch of labels
###Code
labels=torch.LongTensor([2,3])
print(labels)
###Output
tensor([2, 3])
###Markdown
Make a batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 5, -0.5], [1.4, -1.7 , -1.3, 5.0] ])
print(scores)
utils.display_scores(scores)
###Output
_____no_output_____
###Markdown
compute the average loss on this batch
###Code
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
loss = 0.023508310317993164
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 3.1, -0.5], [1.4, -1.7 , -1.3, 2.0] ])
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[-1.2000, 0.5000, 3.1000, -0.5000],
[ 1.4000, -1.7000, -1.3000, 2.0000]])
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([[0.8, 2.3, -1.0, -1.2] , [-1.2, 1.3, 5.0 , -2.0 ] ] )
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[ 0.8000, 2.3000, -1.0000, -1.2000],
[-1.2000, 1.3000, 5.0000, -2.0000]])
###Markdown
Lab 01 : Cross-entropy loss -- demo
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
# find automatically the path of the folder containing "file_name" :
file_name = 'cross_entropy_demo.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
# if previous search failed or too long, comment the previous line and simply write down manually the path below :
#path_to_file = '/content/gdrive/My Drive/CE7454_2020_codes/codes/labs_lecture04/lab01_cross_entropy'
print(path_to_file)
# change current path to the folder containing "file_name"
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import utils
###Output
_____no_output_____
###Markdown
Make a Cross Entropy Criterion and call it mycrit
###Code
mycrit=nn.CrossEntropyLoss()
print(mycrit)
###Output
CrossEntropyLoss()
###Markdown
Make a batch of labels
###Code
labels=torch.LongTensor([2,3])
print(labels)
###Output
tensor([2, 3])
###Markdown
Make a batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 5, -0.5], [1.4, -1.7 , -1.3, 5.0] ])
print(scores)
utils.display_scores(scores)
###Output
_____no_output_____
###Markdown
compute the average loss on this batch
###Code
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
loss = 0.023508397862315178
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 3.1, -0.5], [1.4, -1.7 , -1.3, 2.0] ])
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[-1.2000, 0.5000, 3.1000, -0.5000],
[ 1.4000, -1.7000, -1.3000, 2.0000]])
loss = 0.2927485406398773
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([[0.8, 2.3, -1.0, -1.2] , [-1.2, 1.3, 5.0 , -2.0 ] ] )
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[ 0.8000, 2.3000, -1.0000, -1.2000],
[-1.2000, 1.3000, 5.0000, -2.0000]])
loss = 5.291047096252441
###Markdown
Lab 01 : Cross-entropy loss -- demo
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
from google.colab import drive
drive.mount('/content/gdrive')
file_name = 'cross_entropy_demo.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
print(path_to_file)
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import utils
###Output
_____no_output_____
###Markdown
Make a Cross Entropy Criterion and call it mycrit
###Code
mycrit=nn.CrossEntropyLoss()
print(mycrit)
###Output
CrossEntropyLoss()
###Markdown
Make a batch of labels
###Code
labels=torch.LongTensor([2,3])
print(labels)
###Output
tensor([2, 3])
###Markdown
Make a batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 5, -0.5], [1.4, -1.7 , -1.3, 5.0] ])
print(scores)
utils.display_scores(scores)
###Output
_____no_output_____
###Markdown
compute the average loss on this batch
###Code
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
loss = 0.023508310317993164
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 3.1, -0.5], [1.4, -1.7 , -1.3, 2.0] ])
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[-1.2000, 0.5000, 3.1000, -0.5000],
[ 1.4000, -1.7000, -1.3000, 2.0000]])
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([[0.8, 2.3, -1.0, -1.2] , [-1.2, 1.3, 5.0 , -2.0 ] ] )
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[ 0.8000, 2.3000, -1.0000, -1.2000],
[-1.2000, 1.3000, 5.0000, -2.0000]])
###Markdown
Lab 01 : Cross-entropy loss -- demo
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
from google.colab import drive
drive.mount('/content/gdrive')
file_name = 'cross_entropy_demo.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
print(path_to_file)
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import torch.nn.functional as F
import utils
###Output
_____no_output_____
###Markdown
Make a Cross Entropy Criterion and call it mycrit
###Code
mycrit=nn.CrossEntropyLoss()
print(mycrit)
###Output
CrossEntropyLoss()
###Markdown
Make a batch of labels
###Code
labels=torch.LongTensor([2,3])
print(labels)
###Output
tensor([2, 3])
###Markdown
Make a batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 5, -0.5], [1.4, -1.7 , -1.3, 5.0] ])
print(scores)
utils.display_scores(scores)
###Output
_____no_output_____
###Markdown
compute the average loss on this batch
###Code
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
print(scores, '\n')
s = F.softmax(scores, dim=1)
print(s, '\n')
p = F.log_softmax(scores, dim=1)
print(p, '\n')
loss = 1/2 * (- p[0][2] - p[1][3])
print(loss.item())
###Output
tensor([[-1.2000, 0.5000, 5.0000, -0.5000],
[ 1.4000, -1.7000, -1.3000, 5.0000]])
tensor([[0.0020, 0.0109, 0.9831, 0.0040],
[0.0265, 0.0012, 0.0018, 0.9705]])
tensor([[-6.2171, -4.5171, -0.0171, -5.5171],
[-3.6299, -6.7299, -6.3299, -0.0299]])
0.023508397862315178
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 3.1, -0.5], [1.4, -1.7 , -1.3, 2.0] ])
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[-1.2000, 0.5000, 3.1000, -0.5000],
[ 1.4000, -1.7000, -1.3000, 2.0000]])
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([[0.8, 2.3, -1.0, -1.2] , [-1.2, 1.3, 5.0 , -2.0 ] ] )
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
s = torch.Tensor([[-1.2, 1.3, -2.4, 5]])
p = F.softmax(s, dim=1)
print(p)
###Output
tensor([[1.9754e-03, 2.4065e-02, 5.9497e-04, 9.7336e-01]])
###Markdown
Lab 01 : Cross-entropy loss -- demo
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
# find automatically the path of the folder containing "file_name" :
file_name = 'cross_entropy_demo.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
# if previous search failed or too long, comment the previous line and simply write down manually the path below :
#path_to_file = '/content/gdrive/My Drive/CS5242_2021_codes/codes/labs_lecture04/lab01_cross_entropy'
print(path_to_file)
# change current path to the folder containing "file_name"
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import utils
###Output
_____no_output_____
###Markdown
Make a Cross Entropy Criterion and call it mycrit
###Code
mycrit=nn.CrossEntropyLoss()
print(mycrit)
###Output
CrossEntropyLoss()
###Markdown
Make a batch of labels
###Code
labels=torch.LongTensor([2,3])
print(labels)
###Output
tensor([2, 3])
###Markdown
Make a batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 5, -0.5], [1.4, -1.7 , -1.3, 5.0] ])
print(scores)
utils.display_scores(scores)
###Output
_____no_output_____
###Markdown
compute the average loss on this batch
###Code
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
loss = 0.023508310317993164
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 3.1, -0.5], [1.4, -1.7 , -1.3, 2.0] ])
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[-1.2000, 0.5000, 3.1000, -0.5000],
[ 1.4000, -1.7000, -1.3000, 2.0000]])
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([[0.8, 2.3, -1.0, -1.2] , [-1.2, 1.3, 5.0 , -2.0 ] ] )
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[ 0.8000, 2.3000, -1.0000, -1.2000],
[-1.2000, 1.3000, 5.0000, -2.0000]])
###Markdown
Lab 01 : Cross-entropy loss -- demo
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
# find automatically the path of the folder containing "file_name" :
file_name = 'cross_entropy_demo.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
# if previous search failed or too long, comment the previous line and simply write down manually the path below :
#path_to_file = '/content/gdrive/My Drive/CS5242_2021_codes/codes/labs_lecture04/lab01_cross_entropy'
print(path_to_file)
# change current path to the folder containing "file_name"
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import utils
###Output
_____no_output_____
###Markdown
Make a Cross Entropy Criterion and call it mycrit
###Code
mycrit=nn.CrossEntropyLoss()
print(mycrit)
###Output
CrossEntropyLoss()
###Markdown
Make a batch of labels
###Code
labels=torch.LongTensor([2,3])
print(labels)
###Output
tensor([2, 3])
###Markdown
Make a batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 5, -0.5], [1.4, -1.7 , -1.3, 5.0] ])
print(scores)
utils.display_scores(scores)
###Output
_____no_output_____
###Markdown
compute the average loss on this batch
###Code
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
average_loss
average_loss.item()
###Output
_____no_output_____
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 3.1, -0.5], [1.4, -1.7 , -1.3, 2.0] ])
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[-1.2000, 0.5000, 3.1000, -0.5000],
[ 1.4000, -1.7000, -1.3000, 2.0000]])
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([[0.8, 2.3, -1.0, -1.2] , [-1.2, 1.3, 5.0 , -2.0 ] ] )
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[ 0.8000, 2.3000, -1.0000, -1.2000],
[-1.2000, 1.3000, 5.0000, -2.0000]])
###Markdown
Lab 01 : Cross-entropy loss -- demo
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
# find automatically the path of the folder containing "file_name" :
file_name = 'cross_entropy_demo.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
# if previous search failed or too long, comment the previous line and simply write down manually the path below :
#path_to_file = '/content/gdrive/My Drive/CS5242_2021_codes/codes/labs_lecture04/lab01_cross_entropy'
print(path_to_file)
# change current path to the folder containing "file_name"
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import utils
###Output
_____no_output_____
###Markdown
Make a Cross Entropy Criterion and call it mycrit
###Code
mycrit=nn.CrossEntropyLoss()
print(mycrit)
###Output
CrossEntropyLoss()
###Markdown
Make a batch of labels
###Code
labels=torch.LongTensor([2,3])
print(labels)
###Output
tensor([2, 3])
###Markdown
Make a batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 5, -0.5], [1.4, -1.7 , -1.3, 5.0] ])
print(scores)
utils.display_scores(scores)
###Output
_____no_output_____
###Markdown
compute the average loss on this batch
###Code
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
loss = 0.023508310317993164
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([ [-1.2, 0.5 , 3.1, -0.5], [1.4, -1.7 , -1.3, 2.0] ])
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[-1.2000, 0.5000, 3.1000, -0.5000],
[ 1.4000, -1.7000, -1.3000, 2.0000]])
###Markdown
Try with a different batch of scores
###Code
scores=torch.Tensor([[0.8, 2.3, -1.0, -1.2] , [-1.2, 1.3, 5.0 , -2.0 ] ] )
print(scores)
utils.display_scores(scores)
average_loss = mycrit(scores,labels)
print('loss = ', average_loss.item() )
###Output
tensor([[ 0.8000, 2.3000, -1.0000, -1.2000],
[-1.2000, 1.3000, 5.0000, -2.0000]])
|
tutorials/W1D1_ModelTypes/student/W1D1_Tutorial3.ipynb | ###Markdown
Neuromatch Academy: Week 1, Day 1, Tutorial 3 Model Types: "Why" models__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael WaskomWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. ___ Tutorial ObjectivesThis is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
#@title Video 1: “Why” models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='OOIDEr1e5Gg', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=OOIDEr1e5Gg
###Markdown
Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Could not download data')
else:
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
Section 1: Optimization and InformationNeurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) &= -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Appendix for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well e.g. when $b=e$ we call the units *nats*. First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
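# Worked example of the definition above (an illustrative aside): a fair coin has
# p = [0.5, 0.5], so H_2 = -(0.5*log2(0.5) + 0.5*log2(0.5)) = 1 bit.
fair_coin_entropy = -(0.5 * np.log2(0.5) + 0.5 * np.log2(0.5))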
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `np.nan` ("Not a Number"). By convention, these undefined terms— which correspond to points in the distribution with zero mass—are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
then samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = ...
# implement the equation for Shannon entropy (in bits)
h = ...
# return the absolute value (avoids getting a -0 result)
return np.abs(h)
# Uncomment to test your entropy function
# print(f"{entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_55c07dc8.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$ Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on.
###Code
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$. There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$. Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align} If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
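# Analytic check of the bound derived above: the uniform PMF over n_bins points
# has entropy log2(n_bins) bits (about 5.64 bits for the 50 bins used here).
uniform_entropy_bound = np.log2(n_bins)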
###Output
_____no_output_____
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. Section 2: Information, neurons, and spikes
###Code
#@title Video 2: Entropy of different distributions
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='o6nyrx3KH20', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=o6nyrx3KH20
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
###Code
n_bins = 50
mean_isi = 0.025
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
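# (Among continuous distributions on [0, inf) with a fixed mean, the exponential
#  has the largest differential entropy, making it the natural "most
#  unpredictable" comparison at a fixed firing rate.)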
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(2) Uniform", pmf_uniform, (0, 1.05)),
("(3) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
#@title Video 3: Probabilities from histogram
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='e2U_-07O9jo', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=e2U_-07O9jo
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Exercise 2: Probabilty Mass FunctionYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
neuron_idx = 283
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
pmf = ...
return pmf
# Uncomment when ready to test your function
# pmf = pmf_from_counts(counts)
# plot_pmf(pmf,isi_range)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_49231923.py)*Example output:* Section 3: Calculate entropy from a PMF
###Code
#@title Video 4: Calculating entropy from pmf
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='Xjy-jj-6Oz0', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=Xjy-jj-6Oz0
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
Entropy for Neuron 283: 3.36 bits
###Markdown
Interactive Demo: Entropy of neuronsWe can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
--- Summary
###Code
#@title Video 5: Summary of model types
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='X4K2RR5qBK8', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=X4K2RR5qBK8
###Markdown
Neuromatch Academy: Week 1, Day 1, Tutorial 3 Model Types: "Why" models__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael WaskomWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. ___ Tutorial ObjectivesThis is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
#@title Video 1: “Why” models
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV16t4y1Q7DR', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV16t4y1Q7DR
###Markdown
Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Could not download data')
else:
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
Section 1: Optimization and InformationNeurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) &= -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Appendix for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well e.g. when $b=e$ we call the units *nats*. First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `np.nan` ("Not a Number"). By convention, these undefined terms— which correspond to points in the distribution with zero mass—are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
  samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = ...
# implement the equation for Shannon entropy (in bits)
h = ...
# return the absolute value (avoids getting a -0 result)
return np.abs(h)
# Uncomment to test your entropy function
# print(f"{entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_55c07dc8.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$ Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on.
###Code
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$ There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$ Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align} If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
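# As derived above, a uniform distribution over N points attains the maximum
# possible entropy, log2(N). Printing it makes the upper bound for this
# binning explicit (with n_bins = 50 this is roughly 5.64 bits).
print(f"Maximum entropy with {n_bins} bins: {np.log2(n_bins):.2f} bits")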
###Output
_____no_output_____
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. Section 2: Information, neurons, and spikes
###Code
#@title Video 2: Entropy of different distributions
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1df4y1976g', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1df4y1976g
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
###Code
n_bins = 50
mean_isi = 0.025
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(1) Uniform", pmf_uniform, (0, 1.05)),
("(1) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
#@title Video 3: Probabilities from histogram
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1Jk4y1B7cz', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1Jk4y1B7cz
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Exercise 2: Probability Mass FunctionYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
neuron_idx = 283
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
pmf = ...
return pmf
# Uncomment when ready to test your function
# pmf = pmf_from_counts(counts)
# plot_pmf(pmf,isi_range)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_49231923.py)*Example output:* Section 3: Calculate entropy from a PMF
###Code
#@title Video 4: Calculating entropy from pmf
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1vA411e7Cd', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1vA411e7Cd
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
Entropy for Neuron 283: 3.36 bits
###Markdown
Interactive Demo: Entropy of neuronsWe can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
--- Summary
###Code
#@title Video 5: Summary of model types
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1F5411e7ww', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1F5411e7ww
###Markdown
Tutorial 3: "Why" models**Week 1, Day 1: Model Types****By Neuromatch Academy**__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael Waskom, Ella BattyWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** ___ Tutorial Objectives*Estimated timing of tutorial: 45 minutes*This is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/6dxwe/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# @title Video 1: “Why” models
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV16t4y1Q7DR", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="OOIDEr1e5Gg", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Plotting Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Could not download data')
else:
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
--- Section 1: Optimization and Information*Remember that the notation section is located after the Summary for quick reference!*Neurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) = -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Bonus Section 1 for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well (e.g. when $b=e$ we call the units *nats*). First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? We will compute this in the next exercise. Coding Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `np.nan` ("Not a Number"). By convention, these undefined terms— which correspond to points in the distribution with zero mass—are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
  samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = ...
# implement the equation for Shannon entropy (in bits)
h = ...
# return the absolute value (avoids getting a -0 result)
return np.abs(h)
# Call entropy function and print result
print(f"{entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_f07b571c.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$. Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on - the following plot shows a PMF that would also have zero entropy.
###Code
# @markdown Execute this cell to visualize another PMF with zero entropy
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
# @markdown Execute this cell to visualize a PMF with split mass
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
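# A quick numeric check of the hand calculations discussed below, using
# np.log2 directly (independent of the `entropy` exercise): two equal peaks
# give 1 bit, while a 0.2/0.8 split gives roughly 0.72 bits.
for p in (np.array([0.5, 0.5]), np.array([0.2, 0.8])):
  print(f"peaks {p}: {-np.sum(p * np.log2(p)):.2f} bits")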
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is: $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$ There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: : $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$ Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align} If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
# @markdown Execute this cell to visualize a PMF of uniform distribution
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. --- Section 2: Information, neurons, and spikes*Estimated timing to here from start of tutorial: 20 min*
###Code
# @title Video 2: Entropy of different distributions
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1df4y1976g", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="o6nyrx3KH20", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
###Code
n_bins = 50
mean_isi = 0.025
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(1) Uniform", pmf_uniform, (0, 1.05)),
("(1) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
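# Two quick checks on the values printed above (assuming `entropy` is
# implemented as in Coding Exercise 1): the deterministic PMF should give
# exactly 0 bits, and no distribution over these 50 bins can exceed
# log2(50), roughly 5.64 bits.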
###Output
_____no_output_____
###Markdown
--- Section 3: Calculate entropy of ISI distributions from data*Estimated timing to here from start of tutorial: 25 min* Section 3.1: Computing probabilities from histogram
###Code
# @title Video 3: Probabilities from histogram
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Jk4y1B7cz", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="e2U_-07O9jo", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Coding Exercise 3.1: Probability Mass FunctionYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
pmf = ...
return pmf
# Get neuron index
neuron_idx = 283
# Get counts of ISIs from Steinmetz data
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
# Compute pmf
pmf = pmf_from_counts(counts)
# Visualize
plot_pmf(pmf,isi_range)
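# Optional sanity check (uncomment once pmf_from_counts is implemented):
# a valid PMF is non-negative and its entries sum to 1.
# print(f"PMF sums to {np.sum(pmf):.3f}")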
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_2db0a1bc.py)*Example output:* Section 3.2: Calculating entropy from pmf
###Code
# @title Video 4: Calculating entropy from pmf
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1vA411e7Cd", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Xjy-jj-6Oz0", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
Interactive Demo 3.2: Entropy of neuronsWe can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
--- Section 4: Reflecting on why models*Estimated timing to here from start of tutorial: 35 min* Think! 3: Reflecting on why modelsPlease discuss the following questions for around 10 minutes with your group:- Have you seen why models before?- Have you ever done one?- Why are why models useful?- When are they possible? Does your field have why models?- What do we learn from constructing them? --- Summary*Estimated timing of tutorial: 45 minutes*
###Code
# @title Video 5: Summary of model types
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1F5411e7ww", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="X4K2RR5qBK8", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Tutorial 3: "Why" models**Week 1, Day 1: Model Types****By Neuromatch Academy**__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael WaskomWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. ___ Tutorial ObjectivesThis is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
#@title Video 1: “Why” models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='OOIDEr1e5Gg', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Could not download data')
else:
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
Section 1: Optimization and InformationNeurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) &= -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Appendix for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well e.g. when $b=e$ we call the units *nats*. First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
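# Quick check that this is a well-formed PMF: the entropy exercise below
# assumes the probabilities are non-negative and sum to 1.
print(f"Total probability mass: {np.sum(pmf):.2f}")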
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `np.nan` ("Not a Number"). By convention, these undefined terms— which correspond to points in the distribution with zero mass—are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
  samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = ...
# implement the equation for Shannon entropy (in bits)
h = ...
# return the absolute value (avoids getting a -0 result)
return np.abs(h)
# Uncomment to test your entropy function
# print(f"{entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_3dc69011.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$ Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on.
###Code
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$ There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$ Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align} If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. Section 2: Information, neurons, and spikes
###Code
#@title Video 2: Entropy of different distributions
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='o6nyrx3KH20', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
###Code
n_bins = 50
mean_isi = 0.025
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(1) Uniform", pmf_uniform, (0, 1.05)),
("(1) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
#@title Video 3: Probabilities from histogram
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='e2U_-07O9jo', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Exercise 2: Probability Mass FunctionYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
neuron_idx = 283
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
pmf = ...
return pmf
# Uncomment when ready to test your function
# pmf = pmf_from_counts(counts)
# plot_pmf(pmf,isi_range)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_49231923.py)*Example output:* Section 3: Calculate entropy from a PMF
###Code
#@title Video 4: Calculating entropy from pmf
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='Xjy-jj-6Oz0', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
Interactive Demo: Entropy of neurons We can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
--- Summary
###Code
#@title Video 5: Summary of model types
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='X4K2RR5qBK8', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
###Code
# Mount Google Drive
from google.colab import drive # import drive from google colab
ROOT = "/content/drive" # default location for the drive
print(ROOT) # print content of ROOT (Optional)
drive.mount(ROOT,force_remount=True)
###Output
/content/drive
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly
Enter your authorization code:
··········
Mounted at /content/drive
###Markdown
Neuromatch Academy: Week 1, Day 1, Tutorial 3 Model Types: "Why" models__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael WaskomWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. ___ Tutorial ObjectivesThis is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
#@title Video 1: “Why” models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='OOIDEr1e5Gg', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=OOIDEr1e5Gg
###Markdown
Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Could not download data')
else:
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
Section 1: Optimization and InformationNeurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) &= -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Appendix for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well e.g. when $b=e$ we call the units *nats*. First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? 0 Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `np.nan` ("Not a Number"). By convention, these undefined terms— which correspond to points in the distribution with zero mass—are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
then samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
#raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf>0]
# implement the equation for Shannon entropy (in bits)
h = -np.sum(np.multiply(pmf,np.log2(pmf)))
# return the absolute value (avoids getting a -0 result)
return np.abs(h)
# Uncomment to test your entropy function
print(f"{entropy(pmf):.2f} bits")
###Output
0.00 bits
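###Markdown
As a quick sanity check, the next cell (an added sketch, not part of the original exercise) compares the implementation above against `scipy.stats.entropy`, which accepts a `base` argument and applies the same convention that zero-mass terms contribute nothing. It reuses the `stats` import and the `entropy` function from earlier cells.
###Code
# cross-check the hand-rolled entropy against scipy on a small test PMF
pmf_test = np.array([0.2, 0.8, 0.0])
print(f"ours : {entropy(pmf_test):.4f} bits")
print(f"scipy: {stats.entropy(pmf_test, base=2):.4f} bits")
###Output
_____no_output_____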
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_55c07dc8.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$ Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on.
###Code
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$ There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$ Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align}$$$$ If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
###Output
_____no_output_____
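###Markdown
The $\log_b N$ bound is easy to check empirically. The next cell is an added sketch (not part of the original tutorial): it builds an arbitrary normalized PMF over the same 50 bins and confirms that its entropy stays below $\log_2 50 \approx 5.64$ bits. It reuses `n_bins` and the `entropy` function defined above.
###Code
# any valid PMF over n_bins points has entropy at most log2(n_bins)
rng = np.random.default_rng(0)
pmf_random = rng.random(n_bins)
pmf_random /= pmf_random.sum()
print(f"Random PMF: {entropy(pmf_random):.2f} bits (upper bound: {np.log2(n_bins):.2f} bits)")
###Output
_____no_output_____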
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. Section 2: Information, neurons, and spikes
###Code
#@title Video 2: Entropy of different distributions
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='o6nyrx3KH20', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=o6nyrx3KH20
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
###Code
n_bins = 50
mean_isi = 0.025
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(1) Uniform", pmf_uniform, (0, 1.05)),
("(1) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
###Output
Deterministic: 0.00 bits
Uniform: 3.32 bits
Exponential: 3.77 bits
###Markdown
The exponential PMF can carry more entropy than the uniform PMF here because the uniform mass is confined to the bins below twice the mean ISI, while the exponential spreads its mass across the full range of bins (a quick numerical check follows the next video cell).
###Code
#@title Video 3: Probabilities from histogram
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='e2U_-07O9jo', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=e2U_-07O9jo
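###Markdown
To make the note above concrete, here is a small added check (not part of the original tutorial). The uniform PMF constructed earlier occupies only the first `2 * mean_idx` = 10 bins, so its entropy is $\log_2 10 \approx 3.32$ bits, whereas a uniform PMF over all 50 bins would reach $\log_2 50 \approx 5.64$ bits, which no distribution on this binning can exceed. The cell reuses `mean_idx` and `n_bins` from the cell above.
###Code
# entropy of a uniform PMF over k bins is log2(k)
for k in (2 * mean_idx, n_bins):
  print(f"Uniform over {k} bins: {np.log2(k):.2f} bits")
###Output
_____no_output_____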
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons? One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Exercise 2: Probability Mass Function Your second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts. To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
neuron_idx = 283
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
#raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
pmf = counts/np.sum(counts)
return pmf
# Uncomment when ready to test your function
pmf = pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
###Output
_____no_output_____
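###Markdown
As an aside, the normalization above is equivalent to a density histogram scaled by the bin widths, since `density = counts / (total * bin_width)`. The next cell is an added sketch (not part of the original exercise) that verifies this, reusing `isi`, `bins`, and `counts` from the previous cell.
###Code
# a density histogram times the bin widths recovers the same PMF
density, _ = np.histogram(isi, bins, density=True)
pmf_from_density = density * np.diff(bins)
print(np.allclose(pmf_from_density, counts / np.sum(counts)))
###Output
_____no_output_____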
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_49231923.py)*Example output:* Section 3: Calculate entropy from a PMF
###Code
#@title Video 4: Calculating entropy from pmf
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='Xjy-jj-6Oz0', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=Xjy-jj-6Oz0
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
Entropy for Neuron 283: 3.36 bits
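###Markdown
For reference, the next cell is an added sketch (not part of the original tutorial) that compares the value above with the entropy of an exponential PMF sharing neuron 283's mean ISI and binned the same way. It reuses `isi`, `bins`, `stats`, and `entropy` from earlier cells.
###Code
# exponential PMF with the neuron's mean ISI, evaluated on the same bins
pmf_exp_fit = stats.expon.pdf(bins[1:], scale=np.mean(isi))
pmf_exp_fit /= np.sum(pmf_exp_fit)
print(f"Matched exponential: {entropy(pmf_exp_fit):.2f} bits")
###Output
_____no_output_____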
###Markdown
Interactive Demo: Entropy of neurons We can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
###Output
_____no_output_____
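###Markdown
The interactive demo can also be summarized across the whole population. The next cell is an added sketch (not part of the original tutorial): it estimates the ISI entropy of every neuron and reports the observed range, reusing `steinmetz_spikes`, `isi_range`, `n_bins`, and `entropy` from earlier cells.
###Code
# estimate ISI entropy for each neuron in the dataset
isi_entropies = []
for spikes in steinmetz_spikes:
  counts_i, _ = np.histogram(np.diff(spikes), np.linspace(*isi_range, n_bins + 1))
  if counts_i.sum() > 0:  # skip neurons with no ISIs inside isi_range
    isi_entropies.append(entropy(counts_i / counts_i.sum()))
print(f"{len(isi_entropies)} neurons: ISI entropy ranges from "
      f"{min(isi_entropies):.2f} to {max(isi_entropies):.2f} bits")
###Output
_____no_output_____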
###Markdown
--- Summary
###Code
#@title Video 5: Summary of model types
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='X4K2RR5qBK8', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=X4K2RR5qBK8
###Markdown
Neuromatch Academy: Week 1, Day 1, Tutorial 3 Model Types: "Why" models__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael WaskomWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. ___ Tutorial ObjectivesThis is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
#@title Video 1: “Why” models
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV16t4y1Q7DR', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV16t4y1Q7DR
###Markdown
Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Could not download data')
else:
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
Section 1: Optimization and InformationNeurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) &= -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Appendix for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well e.g. when $b=e$ we call the units *nats*. First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `np.nan` ("Not a Number"). By convention, these undefined terms— which correspond to points in the distribution with zero mass—are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
then samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = ...
# implement the equation for Shannon entropy (in bits)
h = ...
# return the absolute value (avoids getting a -0 result)
return np.abs(h)
# Uncomment to test your entropy function
# print(f"{entropy(pmf):.2f} bits")
###Output
_____no_output_____
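###Markdown
A short added note on the zero-mass terms (not part of the original exercise): strictly, `np.log2(0.)` evaluates to `-inf` (with a runtime warning) and the product `0 * log2(0)` is `nan`, so either way the zero-probability bins must be dropped before summing. The sketch below illustrates the masking step the hint describes.
###Code
with np.errstate(divide="ignore", invalid="ignore"):
  print(np.log2(0.0))        # -inf
  print(0.0 * np.log2(0.0))  # nan
p_demo = np.array([0.5, 0.5, 0.0])
p_nonzero = p_demo[p_demo > 0]  # keep only bins with non-zero mass
print(-np.sum(p_nonzero * np.log2(p_nonzero)), "bit")
###Output
_____no_output_____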
###Markdown
[*Click for solution*](https://github.com/erlichlab/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_55c07dc8.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$ Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on.
###Code
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$ There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$ Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align}$$$$ If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. Section 2: Information, neurons, and spikes
###Code
#@title Video 2: Entropy of different distributions
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1df4y1976g', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1df4y1976g
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
###Code
n_bins = 50
mean_isi = 0.025
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(1) Uniform", pmf_uniform, (0, 1.05)),
("(1) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
#@title Video 3: Probabilities from histogram
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1Jk4y1B7cz', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1Jk4y1B7cz
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons? One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Exercise 2: Probability Mass Function Your second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts. To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
neuron_idx = 283
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
pmf = ...
return pmf
# Uncomment when ready to test your function
# pmf = pmf_from_counts(counts)
# plot_pmf(pmf,isi_range)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/erlichlab/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_49231923.py)*Example output:* Section 3: Calculate entropy from a PMF
###Code
#@title Video 4: Calculating entropy from pmf
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1vA411e7Cd', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1vA411e7Cd
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
Entropy for Neuron 283: 3.36 bits
###Markdown
Interactive Demo: Entropy of neurons We can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
--- Summary
###Code
#@title Video 5: Summary of model types
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1F5411e7ww', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1F5411e7ww
###Markdown
Neuromatch Academy: Week 1, Day 1, Tutorial 3 Model Types: "Why" models__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael WaskomWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. ___ Tutorial ObjectivesThis is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
#@title Video 1: “Why” models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='OOIDEr1e5Gg', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=OOIDEr1e5Gg
###Markdown
Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Could not download data')
else:
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
Section 1: Optimization and InformationNeurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) &= -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Appendix for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well e.g. when $b=e$ we call the units *nats*. First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `np.nan` ("Not a Number"). By convention, these undefined terms— which correspond to points in the distribution with zero mass—are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
then samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
# raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf>0]
# implement the equation for Shannon entropy (in bits)
h = -np.sum(pmf*np.log2(pmf))
# return the absolute value (avoids getting a -0 result)
  return np.abs(h)
# Uncomment to test your entropy function
print(f"{entropy(pmf):.2f} bits")
###Output
0.00 bits
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_55c07dc8.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$ Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on.
###Code
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
entropy(pmf)
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
print(entropy(pmf))
###Output
1.0
###Markdown
Here, the entropy calculation is $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$ There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$ Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align}$$$$ If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
print(entropy(pmf))
###Output
5.643856189774725
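###Markdown
A small added check of the arithmetic discussed above (not part of the original notebook): the unequal two-point distribution $p = (0.2, 0.8)$ comes out near 0.72 bits, below the 1 bit of the equal split and well below the $\log_2 50$ bound just printed. It reuses the `entropy` function defined earlier.
###Code
print(f"{entropy(np.array([0.2, 0.8])):.2f} bits")
###Output
_____no_output_____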
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. Section 2: Information, neurons, and spikes
###Code
#@title Video 2: Entropy of different distributions
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='o6nyrx3KH20', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=o6nyrx3KH20
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
###Code
n_bins = 50
mean_isi = 0.025
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(1) Uniform", pmf_uniform, (0, 1.05)),
("(1) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
#@title Video 3: Probabilities from histogram
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='e2U_-07O9jo', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=e2U_-07O9jo
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons? One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Exercise 2: Probability Mass Function Your second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts. To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
neuron_idx = 283
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
# raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
pmf = counts/np.sum(counts)
return pmf
# Uncomment when ready to test your function
pmf = pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_49231923.py)*Example output:* Section 3: Calculate entropy from a PMF
###Code
#@title Video 4: Calculating entropy from pmf
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='Xjy-jj-6Oz0', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=Xjy-jj-6Oz0
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
Entropy for Neuron 283: 3.36 bits
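###Markdown
As an added point of reference (not part of the original tutorial), the 50-bin PMF caps the measurable entropy at $\log_2 50 \approx 5.64$ bits, so the value above sits comfortably inside the allowed range. The cell reuses `n_bins` from earlier.
###Code
print(f"Upper bound for {n_bins} bins: {np.log2(n_bins):.2f} bits")
###Output
_____no_output_____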
###Markdown
Interactive Demo: Entropy of neurons We can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
# Compute each neuron's ISI histogram over isi_range, estimate its entropy,
# and plot the raw ISI histograms (over 0-10 s) of the five highest-entropy neurons
counts = [np.histogram(np.diff(i), bins=50, range=isi_range)[0] for i in steinmetz_spikes]
entropys = [entropy(count / np.sum(count)) for count in counts]
for i in np.argsort(entropys)[::-1][:5]:
  plt.hist(np.diff(steinmetz_spikes[i]), bins=50, range=(0, 10), alpha=0.5)
###Output
_____no_output_____
###Markdown
--- Summary
###Code
#@title Video 5: Summary of model types
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='X4K2RR5qBK8', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=X4K2RR5qBK8
###Markdown
Tutorial 3: "Why" models**Week 1, Day 1: Model Types****By Neuromatch Academy**__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael Waskom, Ella BattyWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** ___ Tutorial Objectives*Estimated timing of tutorial: 45 minutes*This is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/6dxwe/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# @title Video 1: “Why” models
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV16t4y1Q7DR", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="OOIDEr1e5Gg", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Plotting Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Could not download data')
else:
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
--- Section 1: Optimization and Information*Remember that the notation section is located after the Summary for quick reference!*Neurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) = -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Bonus Section 1 for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well (e.g. when $b=e$ we call the units *nats*). First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
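# Added illustration (a sketch, not part of the original cell): apply the entropy
# formula from the text directly to this deterministic PMF. Only the single
# non-zero term contributes, and -1 * log2(1) = 0, so it carries 0 bits.
h_by_hand = -np.sum(pmf[pmf > 0] * np.log2(pmf[pmf > 0]))
assert np.isclose(h_by_hand, 0.0)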
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? We will compute this in the next exercise. Coding Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `np.nan` ("Not a Number"). By convention, these undefined terms— which correspond to points in the distribution with zero mass—are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
then samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
#raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
# implement the equation for Shannon entropy (in bits)
h = - np.sum(pmf * np.log2(pmf))
# return the absolute value (avoids getting a -0 result)
return np.abs(h)
# Call entropy function and print result
print(f"{entropy(pmf):.2f} bits")
###Output
0.00 bits
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_f07b571c.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$. Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on - the following plot shows a PMF that would also have zero entropy.
###Code
# @markdown Execute this cell to visualize another PMF with zero entropy
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
# @markdown Execute this cell to visualize a PMF with split mass
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
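# Added sketch: quantify the claims in the next paragraph. The equal split plotted
# above carries 1 bit, while an unequal 0.2/0.8 split (built on a copy, so the
# plot is unaffected) drops to roughly 0.72 bits.
pmf_weighted = np.zeros(n_bins)
pmf_weighted[len(pmf_weighted) // 3] = 0.2
pmf_weighted[2 * len(pmf_weighted) // 3] = 0.8
h_equal, h_weighted = entropy(pmf), entropy(pmf_weighted)  # ~1.0 and ~0.72 bits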
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is: $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$ There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: : $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$ Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align} If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
# @markdown Execute this cell to visualize a PMF of uniform distribution
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
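# Added check (sketch): the uniform PMF should sit exactly at the theoretical
# maximum of log2(n_bins) ~ 5.64 bits; any estimate above this ceiling would
# indicate a bug in the entropy computation.
assert np.isclose(entropy(pmf), np.log2(n_bins))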
###Output
_____no_output_____
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. --- Section 2: Information, neurons, and spikes*Estimated timing to here from start of tutorial: 20 min*
###Code
# @title Video 2: Entropy of different distributions
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1df4y1976g", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="o6nyrx3KH20", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
###Code
n_bins = 50
mean_isi = 0.025
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(1) Uniform", pmf_uniform, (0, 1.05)),
("(1) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
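# Added sketch: confirm the three PMFs really are built around the same mean ISI.
# Binning and truncation at 0.25 s shift the discretized means slightly, so a
# loose tolerance is used here.
bin_centers = 0.5 * (bins[1:] + bins[:-1])
for pmf_check in (pmf_single, pmf_uniform, pmf_exp):
  assert abs(np.sum(bin_centers * pmf_check) - mean_isi) < 0.01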
###Output
Deterministic: 0.00 bits
Uniform: 3.32 bits
Exponential: 3.77 bits
###Markdown
--- Section 3: Calculate entropy of ISI distributions from data*Estimated timing to here from start of tutorial: 25 min* Section 3.1: Computing probabilities from histogram
###Code
# @title Video 3: Probabilities from histogram
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Jk4y1B7cz", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="e2U_-07O9jo", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Coding Exercise 3.1: Probability Mass FunctionYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
#raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
pmf = counts / np.sum(counts)
return pmf
# Get neuron index
neuron_idx = 283
# Get counts of ISIs from Steinmetz data
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
# Compute pmf
pmf = pmf_from_counts(counts)
# Visualize
plot_pmf(pmf,isi_range)
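# Added consistency check (sketch): a valid PMF is non-negative and sums to 1.
assert np.all(pmf >= 0) and np.isclose(np.sum(pmf), 1.0)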
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_2db0a1bc.py)*Example output:* Section 3.2: Calculating entropy from pmf
###Code
# @title Video 4: Calculating entropy from pmf
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1vA411e7Cd", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Xjy-jj-6Oz0", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
Entropy for Neuron 283: 3.36 bits
###Markdown
Interactive Demo 3.2: Entropy of neuronsWe can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
--- Section 4: Reflecting on why models*Estimated timing to here from start of tutorial: 35 min* Think! 3: Reflecting on why modelsPlease discuss the following questions for around 10 minutes with your group:- Have you seen why models before?- Have you ever done one?- Why are why models useful?- When are they possible? Does your field have why models?- What do we learn from constructing them? --- Summary*Estimated timing of tutorial: 45 minutes*
###Code
# @title Video 5: Summary of model types
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1F5411e7ww", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="X4K2RR5qBK8", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Tutorial 3: "Why" models**Week 1, Day 1: Model Types****By Neuromatch Academy**__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael Waskom, Ella BattyWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** ___ Tutorial Objectives*Estimated timing of tutorial: 45 minutes*This is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/6dxwe/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# @title Video 1: “Why” models
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV16t4y1Q7DR", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="OOIDEr1e5Gg", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Plotting Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Could not download data')
else:
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
--- Section 1: Optimization and Information*Remember that the notation section is located after the Summary for quick reference!*Neurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) = -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Bonus Section 1 for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well (e.g. when $b=e$ we call the units *nats*). First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? We will compute this in the next exercise. Coding Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `np.nan` ("Not a Number"). By convention, these undefined terms— which correspond to points in the distribution with zero mass—are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
then samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
# raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf>0]
# implement the equation for Shannon entropy (in bits)
  h = -np.sum(pmf * np.log2(pmf))
# return the absolute value (avoids getting a -0 result)
return np.abs(h)
# Call entropy function and print result
print(f"{entropy(pmf):.2f} bits")
###Output
0.00 bits
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_f07b571c.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$. Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on - the following plot shows a PMF that would also have zero entropy.
###Code
# @markdown Execute this cell to visualize another PMF with zero entropy
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
# @markdown Execute this cell to visualize a PMF with split mass
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is: $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$ There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: : $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$ Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align} If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
# @markdown Execute this cell to visualize a PMF of uniform distribution
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. --- Section 2: Information, neurons, and spikes*Estimated timing to here from start of tutorial: 20 min*
###Code
# @title Video 2: Entropy of different distributions
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1df4y1976g", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="o6nyrx3KH20", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
###Code
n_bins = 50
mean_isi = 0.025
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(1) Uniform", pmf_uniform, (0, 1.05)),
("(1) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
###Output
Deterministic: 0.00 bits
Uniform: 3.32 bits
Exponential: 3.77 bits
###Markdown
--- Section 3: Calculate entropy of ISI distributions from data*Estimated timing to here from start of tutorial: 25 min* Section 3.1: Computing probabilities from histogram
###Code
# @title Video 3: Probabilities from histogram
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Jk4y1B7cz", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="e2U_-07O9jo", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Coding Exercise 3.1: Probability Mass FunctionYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
# raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
  pmf = counts / np.sum(counts)
return pmf
# Get neuron index
neuron_idx = 283
# Get counts of ISIs from Steinmetz data
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
# Compute pmf
pmf = pmf_from_counts(counts)
# Visualize
plot_pmf(pmf,isi_range)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_2db0a1bc.py)*Example output:* Section 3.2: Calculating entropy from pmf
###Code
# @title Video 4: Calculating entropy from pmf
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1vA411e7Cd", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Xjy-jj-6Oz0", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
Entropy for Neuron 283: 3.36 bits
###Markdown
Interactive Demo 3.2: Entropy of neuronsWe can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
--- Section 4: Reflecting on why models*Estimated timing to here from start of tutorial: 35 min* Think! 3: Reflecting on why modelsPlease discuss the following questions for around 10 minutes with your group:- Have you seen why models before?- Have you ever done one?- Why are why models useful?- When are they possible? Does your field have why models?- What do we learn from constructing them? --- Summary*Estimated timing of tutorial: 45 minutes*
###Code
# @title Video 5: Summary of model types
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1F5411e7ww", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="X4K2RR5qBK8", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Tutorial 3: "Why" models**Week 1, Day 1: Model Types****By Neuromatch Academy**__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael WaskomWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. ___ Tutorial ObjectivesThis is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
#@title Video 1: “Why” models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='OOIDEr1e5Gg', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Could not download data')
else:
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
Section 1: Optimization and InformationNeurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) &= -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Appendix for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well e.g. when $b=e$ we call the units *nats*. First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `np.nan` ("Not a Number"). By convention, these undefined terms— which correspond to points in the distribution with zero mass—are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
then samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = ...
# implement the equation for Shannon entropy (in bits)
h = ...
# return the absolute value (avoids getting a -0 result)
return np.abs(h)
# Uncomment to test your entropy function
# print(f"{entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_3dc69011.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$ Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on.
###Code
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$ There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$ Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align} If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. Section 2: Information, neurons, and spikes
###Code
#@title Video 2: Entropy of different distributions
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='o6nyrx3KH20', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
###Code
n_bins = 50
mean_isi = 0.025
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(1) Uniform", pmf_uniform, (0, 1.05)),
("(1) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
#@title Video 3: Probabilities from histogram
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='e2U_-07O9jo', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Exercise 2: Probability Mass FunctionYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
neuron_idx = 283
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
pmf = ...
return pmf
# Uncomment when ready to test your function
# pmf = pmf_from_counts(counts)
# plot_pmf(pmf,isi_range)
###Output
_____no_output_____
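###Markdown
As a toy illustration of the normalization equation above (the counts here are made up and are not the Steinmetz data):
###Code
# Toy illustration of p_i = n_i / sum_i(n_i); these counts are invented.
toy_counts = np.array([4, 10, 6, 0])
toy_pmf = toy_counts / np.sum(toy_counts)
print(toy_pmf)          # -> [0.2 0.5 0.3 0. ]
print(np.sum(toy_pmf))  # a valid PMF sums to 1.0
###Output
_____no_output_____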
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_49231923.py)*Example output:* Section 3: Calculate entropy from a PMF
###Code
#@title Video 4: Calculating entropy from pmf
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='Xjy-jj-6Oz0', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
Interactive Demo: Entropy of neuronsWe can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
--- Summary
###Code
#@title Video 5: Summary of model types
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='X4K2RR5qBK8', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Tutorial 3: "Why" models**Week 1, Day 1: Model Types****By Neuromatch Academy**__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael WaskomWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** ___ Tutorial ObjectivesThis is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
# @title Video 1: “Why” models
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="OOIDEr1e5Gg", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Could not download data')
else:
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
Section 1: Optimization and InformationNeurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) &= -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Appendix for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well e.g. when $b=e$ we call the units *nats*. First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `np.nan` ("Not a Number"). By convention, these undefined terms— which correspond to points in the distribution with zero mass—are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
then samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = ...
# implement the equation for Shannon entropy (in bits)
h = ...
# return the absolute value (avoids getting a -0 result)
return np.abs(h)
# Uncomment to test your entropy function
# print(f"{entropy(pmf):.2f} bits")
###Output
_____no_output_____
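###Markdown
If you want an independent check of your implementation once it runs, `scipy.stats.entropy` computes the same quantity when given `base=2`. A minimal sketch (the example PMF is arbitrary):
###Code
# Optional cross-check for the entropy exercise (not part of the exercise).
# scipy uses the natural log by default; base=2 returns bits.
example_pmf = np.array([0.2, 0.8])
print(f"scipy reference: {stats.entropy(example_pmf, base=2):.2f} bits")  # ~0.72
# Once entropy() is implemented, entropy(example_pmf) should print the same value.
###Output
_____no_output_____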
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_3dc69011.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$ Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on.
###Code
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
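###Markdown
A direct calculation for this two-peak case, plus an unequal variant (the 0.2/0.8 weights are just one illustrative choice), computed with NumPy so it does not depend on the Exercise 1 implementation:
###Code
# Equal split: -(0.5*log2(0.5) + 0.5*log2(0.5)) = 1 bit
p_equal = np.array([0.5, 0.5])
print(f"Equal peaks:   {-np.sum(p_equal * np.log2(p_equal)):.2f} bits")
# An unequal split is more predictable, so its entropy is lower (~0.72 bits)
p_unequal = np.array([0.2, 0.8])
print(f"Unequal peaks: {-np.sum(p_unequal * np.log2(p_unequal)):.2f} bits")
###Output
_____no_output_____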
###Markdown
Here, the entropy calculation is $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$ There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$ Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align}$$$$ If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
###Output
_____no_output_____
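###Markdown
A quick numerical check of the $\log_b N$ upper bound derived above, computed directly with NumPy (so it does not rely on the Exercise 1 implementation):
###Code
# The uniform PMF over n_bins points should reach the upper bound log2(n_bins).
uniform_pmf = np.ones(n_bins) / n_bins
h_uniform = -np.sum(uniform_pmf * np.log2(uniform_pmf))
print(f"Uniform entropy over {n_bins} points: {h_uniform:.2f} bits")
print(f"log2({n_bins}) = {np.log2(n_bins):.2f} bits")  # both ~5.64
###Output
_____no_output_____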
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. Section 2: Information, neurons, and spikes
###Code
# @title Video 2: Entropy of different distributions
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="o6nyrx3KH20", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
###Code
n_bins = 50
mean_isi = 0.025
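# Quick check of the mean-ISI / firing-rate relationship described above:
# a mean ISI of 0.025 s corresponds to a mean firing rate of 1 / 0.025 = 40 Hz.
print(f"Mean firing rate implied by mean_isi: {1 / mean_isi:.0f} Hz")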
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(1) Uniform", pmf_uniform, (0, 1.05)),
("(1) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
# @title Video 3: Probabilities from histogram
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="e2U_-07O9jo", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Exercise 2: Probability Mass FunctionYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
neuron_idx = 283
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
pmf = ...
return pmf
# Uncomment when ready to test your function
# pmf = pmf_from_counts(counts)
# plot_pmf(pmf,isi_range)
###Output
_____no_output_____
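###Markdown
An optional sanity check once your `pmf_from_counts` works (a sketch, not part of the exercise): `np.histogram` can also return a probability *density*, and multiplying a density by the bin widths recovers the probability *mass* per bin, so the two approaches should agree.
###Code
# Optional check: a density times the bin widths gives the mass per bin.
density, _ = np.histogram(isi, bins, density=True)
mass_from_density = density * np.diff(bins)
# Uncomment after completing pmf_from_counts:
# print(np.allclose(pmf_from_counts(counts), mass_from_density))  # -> True
###Output
_____no_output_____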
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_49231923.py)*Example output:* Section 3: Calculate entropy from a PMF
###Code
# @title Video 4: Calculating entropy from pmf
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Xjy-jj-6Oz0", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
Interactive Demo: Entropy of neuronsWe can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
###Output
_____no_output_____
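###Markdown
As an optional extension of the demo above, here is a sketch that computes the ISI entropy of every neuron and reports the extremes. It reuses the `_pmf_from_counts` and `_entropy` helpers defined in the previous cell; neurons with no ISIs inside the binned range are skipped.
###Code
# Optional: entropy of every neuron's ISI distribution, reusing the helpers above.
entropies = np.full(len(steinmetz_spikes), np.nan)
for i, spikes in enumerate(steinmetz_spikes):
  counts, _ = np.histogram(np.diff(spikes), np.linspace(*isi_range, n_bins + 1))
  if counts.sum() > 0:  # skip neurons with no ISIs inside the binned range
    entropies[i] = _entropy(_pmf_from_counts(counts))
print(f"Lowest entropy:  neuron {np.nanargmin(entropies)} ({np.nanmin(entropies):.2f} bits)")
print(f"Highest entropy: neuron {np.nanargmax(entropies)} ({np.nanmax(entropies):.2f} bits)")
###Output
_____no_output_____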
###Markdown
--- Summary
###Code
# @title Video 5: Summary of model types
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="X4K2RR5qBK8", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
NMA Model Types Tutorial 3: "Why" modelsIn this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.Tutorial objectives:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
#@title Video: “Why” models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='fxbBJu258oE', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=fxbBJu258oE
###Markdown
Setup
###Code
#@title Imports
import io
import matplotlib.pyplot as plt
import numpy as np
import requests
import scipy.stats as stats
import ipywidgets as widgets
fig_w, fig_h = (6, 4)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
%matplotlib inline
#@title Helper Functions
def histogram(counts, bins, vlines=(), ax=None, ax_args=None, **kwargs):
"""Plot a step histogram given counts over bins.
TODO:
* Passing the mean directly, but could estimate from the counts and bins.
"""
if ax is None:
_, ax = plt.subplots()
# duplicate the first element of `counts` to match bin edges
counts = np.insert(counts, 0, counts[0])
ax.fill_between(bins, counts, step="pre", alpha=0.4, **kwargs) # area shading
ax.plot(bins, counts, drawstyle="steps", **kwargs) # lines
for x in vlines:
    ax.axvline(x, color='r', linestyle='dotted')  # vertical line
if ax_args is None:
ax_args = {}
# heuristically set max y to leave a bit of room
ymin, ymax = ax_args.get('ylim', [None, None])
if ymax is None:
ymax = np.max(counts)
if ax_args.get('yscale', 'linear') == 'log':
ymax *= 1.5
else:
ymax *= 1.1
if ymin is None:
ymin = 0
ax_args['ylim'] = [ymin, ymax]
ax.set(**ax_args)
ax.autoscale(enable=True, axis='x', tight=True)
def eventplot(event_times, colors='k', ax=None, ax_args=None, **kwargs):
if ax is None:
_, ax = plt.subplots()
  p = ax.eventplot(event_times, colors=colors, **kwargs)
if ax_args is None:
ax_args = {}
# `eventplot` behaves differently with a single series? just blank the y-axis for now
if len(p) == 1:
ax.yaxis.set_visible(False)
ax.set(**ax_args)
ax.autoscale(enable=True, axis='x', tight=True)
#@title Download Data
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
  print('Could not download data')
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
Optimization and InformationNeurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) &= -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Appendix for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well e.g. when $b=e$ we call the units *nats*. First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_points = 50 # points supporting the distribution
bins = np.linspace(0, 1, n_points + 1)
pmf = np.zeros(n_points)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Note: the middle point of `pmf` maps to the middle bin of `bins` because of shared indexing
histogram(pmf, bins, ax_args={
'xlabel': "x",
'ylabel': "p(x)",
'xlim': [0, None],
})
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? Exercise: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability mass function. Remember that we are interested in bits, so be sure to use the correct log function. Also recall that $\log(0)$ is undefined and will produce a `nan` (Not a Number) value if evaluated.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
then samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
###############################################################
## TODO for students: compute the entropy of the provided PMF #
###############################################################
raise NotImplementedError("Student excercise: implement the equation for entropy")
# Uncomment once the entropy function is complete
# print(f"{entropy(pmf):.2f} bits")
###Output
_____no_output_____
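###Markdown
A side note on the units mentioned above, using scipy as an independent reference (the example PMF is arbitrary): `scipy.stats.entropy` returns nats by default and bits when `base=2`, and the two differ by a factor of $\ln 2$.
###Code
# Units side note: scipy's entropy uses the natural log (nats) by default;
# base=2 gives bits. The example PMF below is arbitrary.
p = np.array([0.2, 0.8])
h_nats = stats.entropy(p)
h_bits = stats.entropy(p, base=2)
print(f"{h_nats:.3f} nats = {h_bits:.3f} bits; bits = nats / ln(2) = {h_nats / np.log(2):.3f}")
###Output
_____no_output_____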
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_8c67cf88.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = 0$ Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on.
###Code
pmf = np.zeros(n_points)
pmf[2] = 1.0 # arbitrary point has all the mass
histogram(pmf, bins, ax_args={
'title': f'Entropy = {entropy(pmf):.2f} bits',
'xlabel': "x",
'ylabel': "p(x)",
})
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
pmf = np.zeros(n_points)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
histogram(pmf, bins, ax_args={
'title': f'Entropy = {entropy(pmf):.2f} bits',
'xlabel': "x",
'ylabel': "p(x)",
'ylim': [0, 1],
})
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$ There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$ Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align}$$$$ If we have $N$ discrete points, the _uniform distribution_ (where points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
pmf = np.ones(n_points) / n_points # [1/N] * N
histogram(pmf, bins, ax_args={
'title': f'Entropy = {entropy(pmf):.2f} bits',
'xlabel': "x",
'ylabel': "p(x)",
'ylim': [0, 1],
})
###Output
_____no_output_____
###Markdown
Here, there are 50 bins and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 bins and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. Information, neurons, and spikes Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? Let us consider three hypothetical neurons similar to those we explored in the Steinmetz data that all have the same mean ISIs but with different distributions:1. Deterministic2. Uniform3. ExponentialHow do we expect their entropies to differ?
###Code
n_bins = 50
mean_isi = 0.025
bins = np.linspace(0, 0.25, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros_like(bins[1:])
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros_like(bins[1:])
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(18, 5))
histogram(pmf_single, bins, ax=ax1, ax_args={
'title': '(1) Deterministic',
'xlabel': "Inter-spike interval (s)",
'ylabel': "Probability mass",
'ylim': [0, 1.0]
})
histogram(pmf_uniform, bins, ax=ax2, ax_args={
'title': '(2) Uniform',
'xlabel': "Inter-spike interval (s)",
'ylabel': "Probability mass",
'ylim': [0, 0.25]
})
histogram(pmf_exp, bins, ax=ax3, ax_args={
'title': '(3) Exponential',
'xlabel': "Inter-spike interval (s)",
'ylabel': "Probability mass",
'ylim': [0, 0.25]
})
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
###Output
Deterministic: 0.00 bits
Uniform: 3.32 bits
Exponential: 3.77 bits
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p(i) = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p(i)$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval.
###Code
#@title Video: Entropy of different distributions
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='hySy-J51vcI', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=hySy-J51vcI
###Markdown
Exercise: Probability Mass FunctionYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
neuron_idx = 283
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(0, 0.25, n_bins + 1)
counts, _ = np.histogram(isi, bins)
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
## TODO for students: compute the probability mass function from ISI counts
###########################################################################
raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
# Uncomment once you complete the function
# pmf = pmf_from_counts(counts)
# histogram(pmf, bins, ax_args={
# 'title': f"Neuron {neuron_idx}",
# 'xlabel': "Inter-spike interval (s)",
# 'ylabel': "Probability mass",
# })
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_f3b21477.py)*Example output:*
###Code
#@title Video: Calculating entropy of a probability distribution
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='cyBu1pEGOh4', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=cyBu1pEGOh4
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
Entropy for Neuron 283: 3.36 bits
###Markdown
Data ExplorationWe can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information.
###Code
#@title Steinmetz Neuron Information Explorer
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(0, 0.25, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
ymax = max(0.2, np.max(pmf))
histogram(pmf, bins, ax_args={
'title': f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits",
'xlabel': "Inter-spike interval (s)",
'ylabel': "Probability mass",
'ylim': [0, ymax]
})
#@title Video: Summary of “why” models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='-eGFd7E_smA', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=-eGFd7E_smA
###Markdown
NMA Model Types Tutorial 3: "Why" modelsIn this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.Tutorial objectives:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
#@title Video: “Why” models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='fxbBJu258oE', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=fxbBJu258oE
###Markdown
Setup
###Code
#@title Imports
import io
import matplotlib.pyplot as plt
import numpy as np
import requests
import scipy.stats as stats
import ipywidgets as widgets
fig_w, fig_h = (6, 4)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
%matplotlib inline
#@title Helper Functions
def histogram(counts, bins, vlines=(), ax=None, ax_args=None, **kwargs):
"""Plot a step histogram given counts over bins.
TODO:
* Passing the mean directly, but could estimate from the counts and bins.
"""
if ax is None:
_, ax = plt.subplots()
# duplicate the first element of `counts` to match bin edges
counts = np.insert(counts, 0, counts[0])
ax.fill_between(bins, counts, step="pre", alpha=0.4, **kwargs) # area shading
ax.plot(bins, counts, drawstyle="steps", **kwargs) # lines
for x in vlines:
    ax.axvline(x, color='r', linestyle='dotted')  # vertical line
if ax_args is None:
ax_args = {}
# heuristically set max y to leave a bit of room
ymin, ymax = ax_args.get('ylim', [None, None])
if ymax is None:
ymax = np.max(counts)
if ax_args.get('yscale', 'linear') == 'log':
ymax *= 1.5
else:
ymax *= 1.1
if ymin is None:
ymin = 0
ax_args['ylim'] = [ymin, ymax]
ax.set(**ax_args)
ax.autoscale(enable=True, axis='x', tight=True)
def eventplot(event_times, colors='k', ax=None, ax_args=None, **kwargs):
if ax is None:
_, ax = plt.subplots()
  p = ax.eventplot(event_times, colors=colors, **kwargs)
if ax_args is None:
ax_args = {}
# `eventplot` behaves differently with a single series? just blank the y-axis for now
if len(p) == 1:
ax.yaxis.set_visible(False)
ax.set(**ax_args)
ax.autoscale(enable=True, axis='x', tight=True)
#@title Download Data
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
  print('Could not download data')
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
Optimization and InformationNeurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) &= -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Appendix for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well e.g. when $b=e$ we call the units *nats*. First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_points = 50 # points supporting the distribution
bins = np.linspace(0, 1, n_points + 1)
pmf = np.zeros(n_points)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Note: the middle point of `pmf` maps to the middle bin of `bins` because of shared indexing
histogram(pmf, bins, ax_args={
'xlabel': "x",
'ylabel': "p(x)",
'xlim': [0, None],
})
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? Exercise: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability mass function. Remember that we are interested in bits, so be sure to use the correct log function. Also recall that $\log(0)$ is undefined and will produce a `nan` (Not a Number) value if evaluated.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
then samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
###############################################################
## TODO for students: compute the entropy of the provided PMF #
###############################################################
raise NotImplementedError("Student excercise: implement the equation for entropy")
# Uncomment once the entropy function is complete
# print(f"{entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = 0$ Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on.
###Code
pmf = np.zeros(n_points)
pmf[2] = 1.0 # arbitrary point has all the mass
histogram(pmf, bins, ax_args={
'title': f'Entropy = {entropy(pmf):.2f} bits',
'xlabel': "x",
'ylabel': "p(x)",
})
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
pmf = np.zeros(n_points)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
histogram(pmf, bins, ax_args={
'title': f'Entropy = {entropy(pmf):.2f} bits',
'xlabel': "x",
'ylabel': "p(x)",
'ylim': [0, 1],
})
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$ There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$ Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align}$$$$ If we have $N$ discrete points, the _uniform distribution_ (where points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
pmf = np.ones(n_points) / n_points # [1/N] * N
histogram(pmf, bins, ax_args={
'title': f'Entropy = {entropy(pmf):.2f} bits',
'xlabel': "x",
'ylabel': "p(x)",
'ylim': [0, 1],
})
###Output
_____no_output_____
###Markdown
Here, there are 50 bins and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 bins and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. Information, neurons, and spikes Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? Let us consider three hypothetical neurons similar to those we explored in the Steinmetz data that all have the same mean ISIs but with different distributions:1. Deterministic2. Uniform3. ExponentialHow do we expect their entropies to differ?
###Code
n_bins = 50
mean_isi = 0.025
bins = np.linspace(0, 0.25, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros_like(bins[1:])
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros_like(bins[1:])
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(18, 5))
histogram(pmf_single, bins, ax=ax1, ax_args={
'title': '(1) Deterministic',
'xlabel': "Inter-spike interval (s)",
'ylabel': "Probability mass",
'ylim': [0, 1.0]
})
histogram(pmf_uniform, bins, ax=ax2, ax_args={
'title': '(2) Uniform',
'xlabel': "Inter-spike interval (s)",
'ylabel': "Probability mass",
'ylim': [0, 0.25]
})
histogram(pmf_exp, bins, ax=ax3, ax_args={
'title': '(3) Exponential',
'xlabel': "Inter-spike interval (s)",
'ylabel': "Probability mass",
'ylim': [0, 0.25]
})
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
###Output
Deterministic: 0.00 bits
Uniform: 3.32 bits
Exponential: 3.77 bits
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p(i) = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p(i)$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval.
###Code
#@title Video: Entropy of different distributions
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='hySy-J51vcI', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=hySy-J51vcI
###Markdown
Exercise: Probability Mass FunctionYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
neuron_idx = 283
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(0, 0.25, n_bins + 1)
counts, _ = np.histogram(isi, bins)
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
## TODO for students: compute the probability mass function from ISI counts
###########################################################################
raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
# Uncomment once you complete the function
# pmf = pmf_from_counts(counts)
# histogram(pmf, bins, ax_args={
# 'title': f"Neuron {neuron_idx}",
# 'xlabel': "Inter-spike interval (s)",
# 'ylabel': "Probability mass",
# })
###Output
_____no_output_____
###Markdown
**Example output:**
###Code
#@title Video: Calculating entropy of a probability distribution
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='cyBu1pEGOh4', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=cyBu1pEGOh4
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
Entropy for Neuron 283: 3.36 bits
###Markdown
Data ExplorationWe can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information.
###Code
#@title Steinmetz Neuron Information Explorer
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(0, 0.25, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
ymax = max(0.2, np.max(pmf))
histogram(pmf, bins, ax_args={
'title': f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits",
'xlabel': "Inter-spike interval (s)",
'ylabel': "Probability mass",
'ylim': [0, ymax]
})
#@title Video: Summary of “why” models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='-eGFd7E_smA', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=-eGFd7E_smA
###Markdown
Neuromatch Academy: Week 1, Day 1, Tutorial 3 Model Types: "Why" models__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael WaskomWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. ___ Tutorial ObjectivesThis is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
#@title Video 1: “Why” models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='OOIDEr1e5Gg', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Could not download data')
else:
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
Section 1: Optimization and InformationNeurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) &= -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Appendix for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well e.g. when $b=e$ we call the units *nats*. First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `-inf` (and emit a divide-by-zero warning), so the term $0 \log_2 0$ evaluates to `np.nan` ("Not a Number"). By convention, these undefined terms, which correspond to points in the distribution with zero mass, are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
then samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = ...
# implement the equation for Shannon entropy (in bits)
h = ...
# return the absolute value (avoids getting a -0 result)
return np.abs(h)
# Uncomment to test your entropy function
# print(f"{entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_3dc69011.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$ Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on.
###Code
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$. There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$. Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align} If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
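These hand calculations are easy to check numerically. The sketch below uses its own throwaway helper rather than the `entropy` function from Exercise 1, so nothing here depends on your solution:

```python
import numpy as np

def h_bits(p):
  """Entropy in bits of an array of probabilities (zero-mass points dropped)."""
  p = np.asarray(p, dtype=float)
  p = p[p > 0]
  return -np.sum(p * np.log2(p))

print(h_bits([0.5, 0.5]))         # two equal peaks        -> 1.0 bit
print(h_bits([0.2, 0.8]))         # unequal peaks          -> ~0.72 bits
print(h_bits(np.ones(50) / 50))   # uniform over 50 points -> log2(50)
print(np.log2(50))                # ~5.64, the upper bound for 50 bins
```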
###Code
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. Section 2: Information, neurons, and spikes
###Code
#@title Video 2: Entropy of different distributions
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='o6nyrx3KH20', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
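As a quick check of that inverse relationship: with the mean ISI fixed at 0.025 s (the value used in the next cell), all three hypothetical neurons share the same mean firing rate.

```python
mean_isi = 0.025                                          # seconds, matching the cell below
print(f"Mean firing rate: {1 / mean_isi:.0f} spikes/s")   # 40 spikes/s
```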
###Code
n_bins = 50
mean_isi = 0.025
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(1) Uniform", pmf_uniform, (0, 1.05)),
("(1) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
#@title Video 3: Probabilities from histogram
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='e2U_-07O9jo', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Exercise 2: Probability Mass FunctionYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
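The equation itself is just a normalization. Here it is applied to a made-up counts array (the numbers are arbitrary and unrelated to the dataset); the exercise below applies the same idea to real ISI counts.

```python
import numpy as np

toy_counts = np.array([4, 10, 3, 2, 1])        # made-up histogram counts
toy_pmf = toy_counts / np.sum(toy_counts)      # p_i = n_i / sum_i n_i
print(toy_pmf, toy_pmf.sum())                  # probabilities summing to 1
```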
###Code
neuron_idx = 283
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
pmf = ...
return pmf
# Uncomment when ready to test your function
# pmf = pmf_from_counts(counts)
# plot_pmf(pmf,isi_range)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_49231923.py)*Example output:* Section 3: Calculate entropy from a PMF
###Code
#@title Video 4: Calculating entropy from pmf
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='Xjy-jj-6Oz0', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
Interactive Demo: Entropy of neuronsWe can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
--- Summary
###Code
#@title Video 5: Summary of model types
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='X4K2RR5qBK8', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
_____no_output_____
###Markdown
[](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D1_ModelTypes/student/W1D1_Tutorial3.ipynb) Tutorial 3: "Why" models**Week 1, Day 1: Model Types****By Neuromatch Academy**__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael Waskom, Ella BattyWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** ___ Tutorial Objectives*Estimated timing of tutorial: 45 minutes*This is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/6dxwe/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# @title Video 1: “Why” models
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV16t4y1Q7DR", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="OOIDEr1e5Gg", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Setup
###Code
# Imports
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Plotting Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
import requests
r = requests.get('https://osf.io/sy5xt/download')
if r.status_code != 200:
print('Could not download data')
else:
steinmetz_spikes = np.load(io.BytesIO(r.content), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
--- Section 1: Optimization and Information*Remember that the notation section is located after the Summary for quick reference!*Neurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) = -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Bonus Section 1 for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well (e.g. when $b=e$ we call the units *nats*). First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
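One way to read the formula: $-\log_b p(x)$ measures the "surprise" of observing event $x$, and entropy is the average surprise over the whole distribution. A minimal sketch with arbitrary probabilities:

```python
import numpy as np

for p_x in [1.0, 0.5, 0.25, 0.01]:
  surprise = np.abs(-np.log2(p_x))   # abs() avoids printing -0 when p(x) = 1
  print(f"p(x) = {p_x:<5} surprise = {surprise:.2f} bits")
```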
###Code
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? We will compute this in the next exercise. Coding Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `-inf` (and emit a divide-by-zero warning), so the term $0 \log_2 0$ evaluates to `np.nan` ("Not a Number"). By convention, these undefined terms, which correspond to points in the distribution with zero mass, are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
then samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = ...
# implement the equation for Shannon entropy (in bits)
h = ...
# return the absolute value (avoids getting a -0 result)
return np.abs(h)
# Call entropy function and print result
print(f"{entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_f07b571c.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$. Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on - the following plot shows a PMF that would also have zero entropy.
###Code
# @markdown Execute this cell to visualize another PMF with zero entropy
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
# @markdown Execute this cell to visualize a PMF with split mass
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is: $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$. There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$. Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align} If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
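To see how the entropy varies with the weighting, here is a quick sweep over a two-point distribution with masses $p$ and $1-p$; it uses a throwaway inline calculation rather than the exercise `entropy` function:

```python
import numpy as np

for p in [0.5, 0.6, 0.7, 0.8, 0.9, 0.99]:
  pmf_two = np.array([p, 1 - p])
  h = -np.sum(pmf_two * np.log2(pmf_two))
  print(f"p = {p:<4} H = {h:.3f} bits")   # 1.0 bit at p = 0.5, falling toward 0
```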
###Code
# @markdown Execute this cell to visualize a PMF of uniform distribution
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. --- Section 2: Information, neurons, and spikes*Estimated timing to here from start of tutorial: 20 min*
###Code
# @title Video 2: Entropy of different distributions
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1df4y1976g", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="o6nyrx3KH20", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
###Code
n_bins = 50
mean_isi = 0.025
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(1) Uniform", pmf_uniform, (0, 1.05)),
("(1) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
###Output
_____no_output_____
###Markdown
--- Section 3: Calculate entropy of ISI distributions from data*Estimated timing to here from start of tutorial: 25 min* Section 3.1: Computing probabilities from histogram
###Code
# @title Video 3: Probabilities from histogram
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1Jk4y1B7cz", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="e2U_-07O9jo", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Coding Exercise 3.1: Probability Mass FunctionYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
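As an aside, with evenly spaced bins the same probabilities can also be recovered from `np.histogram`'s `density=True` output by multiplying by the bin width; the toy samples below are arbitrary and only illustrate that equivalence.

```python
import numpy as np

rng = np.random.default_rng(0)
toy_isi = rng.exponential(scale=0.025, size=1000)   # made-up ISI-like samples
toy_bins = np.linspace(0, 0.25, 51)

counts, _ = np.histogram(toy_isi, toy_bins)
density, _ = np.histogram(toy_isi, toy_bins, density=True)

pmf_a = counts / np.sum(counts)        # the equation above
pmf_b = density * np.diff(toy_bins)    # density times bin width
print(np.allclose(pmf_a, pmf_b))       # True
```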
###Code
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
pmf = ...
return pmf
# Get neuron index
neuron_idx = 283
# Get counts of ISIs from Steinmetz data
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
# Compute pmf
pmf = pmf_from_counts(counts)
# Visualize
plot_pmf(pmf,isi_range)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_2db0a1bc.py)*Example output:* Section 3.2: Calculating entropy from pmf
###Code
# @title Video 4: Calculating entropy from pmf
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1vA411e7Cd", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Xjy-jj-6Oz0", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
Interactive Demo 3.2: Entropy of neuronsWe can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
--- Section 4: Reflecting on why models*Estimated timing to here from start of tutorial: 35 min* Think! 3: Reflecting on why modelsPlease discuss the following questions for around 10 minutes with your group:- Have you seen why models before?- Have you ever done one?- Why are why models useful?- When are they possible? Does your field have why models?- What do we learn from constructing them? --- Summary*Estimated timing of tutorial: 45 minutes*
###Code
# @title Video 5: Summary of model types
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1F5411e7ww", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="X4K2RR5qBK8", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 1, Day 1, Tutorial 3 Model Types: "Why" models__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael WaskomWe would like to acknowledge [Steinmetz _et al._ (2019)](https://www.nature.com/articles/s41586-019-1787-x) for sharing their data, a subset of which is used here. ___ Tutorial ObjectivesThis is tutorial 3 of a 3-part series on different flavors of models used to understand neural data. In parts 1 and 2 we explored mechanisms that would produce the data. In this tutorial we will explore models and techniques that can potentially explain *why* the spiking data we have observed is produced the way it is.To understand why different spiking behaviors may be beneficial, we will learn about the concept of entropy. Specifically, we will:- Write code to compute formula for entropy, a measure of information- Compute the entropy of a number of toy distributions- Compute the entropy of spiking activity from the Steinmetz dataset
###Code
#@title Video 1: “Why” models
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV16t4y1Q7DR', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV16t4y1Q7DR
###Markdown
Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("/share/dataset/COMMON/nma.mplstyle.txt")
#@title Helper Functions
def plot_pmf(pmf,isi_range):
"""Plot the probability mass function."""
ymax = max(0.2, 1.05 * np.max(pmf))
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.title(f"Neuron {neuron_idx}")
plt.xlabel("Inter-spike interval (s)")
plt.ylabel("Probability mass")
plt.xlim(isi_range);
plt.ylim([0, ymax])
#@title Download Data
import io
with open('/share/dataset/W1D1/nma_steinmetz_spiketimes_cori_20161214_f32.npz', 'rb') as f:
r = f.read()
steinmetz_spikes = np.load(io.BytesIO(r), allow_pickle=True)['spike_times']
###Output
_____no_output_____
###Markdown
Section 1: Optimization and InformationNeurons can only fire so often in a fixed period of time, as the act of emitting a spike consumes energy that is depleted and must eventually be replenished. To communicate effectively for downstream computation, the neuron would need to make good use of its limited spiking capability. This becomes an optimization problem: What is the optimal way for a neuron to fire in order to maximize its ability to communicate information?In order to explore this question, we first need to have a quantifiable measure for information. Shannon introduced the concept of entropy to do just that, and defined it as\begin{align} H_b(X) &= -\sum_{x\in X} p(x) \log_b p(x)\end{align}where $H$ is entropy measured in units of base $b$ and $p(x)$ is the probability of observing the event $x$ from the set of all possible events in $X$. See the Appendix for a more detailed look at how this equation was derived.The most common base of measuring entropy is $b=2$, so we often talk about *bits* of information, though other bases are used as well e.g. when $b=e$ we call the units *nats*. First, let's explore how entropy changes between some simple discrete probability distributions. In the rest of this tutorial we will refer to these as probability mass functions (PMF), where $p(x_i)$ equals the $i^{th}$ value in an array, and mass refers to how much of the distribution is contained at that value.For our first PMF, we will choose one where all of the probability mass is located in the middle of the distribution.
###Code
n_bins = 50 # number of points supporting the distribution
x_range = (0, 1) # will be subdivided evenly into bins corresponding to points
bins = np.linspace(*x_range, n_bins + 1) # bin edges
pmf = np.zeros(n_bins)
pmf[len(pmf) // 2] = 1.0 # middle point has all the mass
# Since we already have a PMF, rather than un-binned samples, `plt.hist` is not
# suitable. Instead, we directly plot the PMF as a step function to visualize
# the histogram:
pmf_ = np.insert(pmf, 0, pmf[0]) # this is necessary to align plot steps with bin edges
plt.plot(bins, pmf_, drawstyle="steps")
# `fill_between` provides area shading
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
If we were to draw a sample from this distribution, we know exactly what we would get every time. Distributions where all the mass is concentrated on a single event are known as *deterministic*.How much entropy is contained in a deterministic distribution? Exercise 1: Computing EntropyYour first exercise is to implement a method that computes the entropy of a discrete probability distribution, given its mass function. Remember that we are interested in entropy in units of _bits_, so be sure to use the correct log function. Recall that $\log(0)$ is undefined. When evaluated at $0$, NumPy log functions (such as `np.log2`) return `-inf` (and emit a divide-by-zero warning), so the term $0 \log_2 0$ evaluates to `np.nan` ("Not a Number"). By convention, these undefined terms, which correspond to points in the distribution with zero mass, are excluded from the sum that computes the entropy.
###Code
def entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits.
This is a measure of information in the distribution. For a totally
deterministic distribution, where samples are always found in the same bin,
then samples from the distribution give no more information and the entropy
is 0.
For now this assumes `pmf` arrives as a well-formed distribution (that is,
`np.sum(pmf)==1` and `not np.any(pmf < 0)`)
Args:
pmf (np.ndarray): The probability mass function for a discrete distribution
represented as an array of probabilities.
Returns:
h (number): The entropy of the distribution in `pmf`.
"""
############################################################################
# Exercise for students: compute the entropy of the provided PMF
# 1. Exclude the points in the distribution with no mass (where `pmf==0`).
# Hint: this is equivalent to including only the points with `pmf>0`.
# 2. Implement the equation for Shannon entropy (in bits).
# When ready to test, comment or remove the next line
raise NotImplementedError("Excercise: implement the equation for entropy")
############################################################################
# reduce to non-zero entries to avoid an error from log2(0)
pmf = ...
# implement the equation for Shannon entropy (in bits)
h = ...
# return the absolute value (avoids getting a -0 result)
return np.abs(h)
# Uncomment to test your entropy function
# print(f"{entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/erlichlab/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_55c07dc8.py) We expect zero surprise from a deterministic distribution. If we had done this calculation by hand, it would simply be $-1\log_2 1 = -0=0$ Note that changing the location of the peak (i.e. the point and bin on which all the mass rests) doesn't alter the entropy. The entropy is about how predictable a sample is with respect to a distribution. A single peak is deterministic regardless of which point it sits on.
###Code
pmf = np.zeros(n_bins)
pmf[2] = 1.0 # arbitrary point has all the mass
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
What about a distribution with mass split equally between two points?
###Code
pmf = np.zeros(n_bins)
pmf[len(pmf) // 3] = 0.5
pmf[2 * len(pmf) // 3] = 0.5
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range)
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, the entropy calculation is $-(0.5 \log_2 0.5 + 0.5\log_2 0.5)=1$. There is 1 bit of entropy. This means that before we take a random sample, there is 1 bit of uncertainty about which point in the distribution the sample will fall on: it will either be the first peak or the second one. Likewise, if we make one of the peaks taller (i.e. its point holds more of the probability mass) and the other one shorter, the entropy will decrease because of the increased certainty that the sample will fall on one point and not the other: $-(0.2 \log_2 0.2 + 0.8\log_2 0.8)\approx 0.72$. Try changing the definition of the number and weighting of peaks, and see how the entropy varies. If we split the probability mass among even more points, the entropy continues to increase. Let's derive the general form for $N$ points of equal mass, where $p_i=p=1/N$:\begin{align} -\sum_i p_i \log_b p_i&= -\sum_i^N \frac{1}{N} \log_b \frac{1}{N}\\ &= -\log_b \frac{1}{N} \\ &= \log_b N\end{align} If we have $N$ discrete points, the _uniform distribution_ (where all points have equal mass) is the distribution with the highest entropy: $\log_b N$. This upper bound on entropy is useful when considering binning strategies, as any estimate of entropy over $N$ discrete points (or bins) must be in the interval $[0, \log_b N]$.
###Code
pmf = np.ones(n_bins) / n_bins # [1/N] * N
pmf_ = np.insert(pmf, 0, pmf[0])
plt.plot(bins, pmf_, drawstyle="steps")
plt.fill_between(bins, pmf_, step="pre", alpha=0.4)
plt.xlabel("x")
plt.ylabel("p(x)")
plt.xlim(x_range);
plt.ylim(0, 1);
###Output
_____no_output_____
###Markdown
Here, there are 50 points and the entropy of the uniform distribution is $\log_2 50\approx 5.64$. If we construct _any_ discrete distribution $X$ over 50 points (or bins) and calculate an entropy of $H_2(X)>\log_2 50$, something must be wrong with our implementation of the discrete entropy computation. Section 2: Information, neurons, and spikes
###Code
#@title Video 2: Entropy of different distributions
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1df4y1976g', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1df4y1976g
###Markdown
Recall the discussion of spike times and inter-spike intervals (ISIs) from Tutorial 1. What does the information content (or distributional entropy) of these measures say about our theory of nervous systems? We'll consider three hypothetical neurons that all have the same mean ISI, but with different distributions:1. Deterministic2. Uniform3. ExponentialFixing the mean of the ISI distribution is equivalent to fixing its inverse: the neuron's mean firing rate. If a neuron has a fixed energy budget and each of its spikes has the same energy cost, then by fixing the mean firing rate, we are normalizing for energy expenditure. This provides a basis for comparing the entropy of different ISI distributions. In other words: if our neuron has a fixed budget, what ISI distribution should it express (all else being equal) to maximize the information content of its outputs?Let's construct our three distributions and see how their entropies differ.
###Code
n_bins = 50
mean_isi = 0.025
isi_range = (0, 0.25)
bins = np.linspace(*isi_range, n_bins + 1)
mean_idx = np.searchsorted(bins, mean_isi)
# 1. all mass concentrated on the ISI mean
pmf_single = np.zeros(n_bins)
pmf_single[mean_idx] = 1.0
# 2. mass uniformly distributed about the ISI mean
pmf_uniform = np.zeros(n_bins)
pmf_uniform[0:2*mean_idx] = 1 / (2 * mean_idx)
# 3. mass exponentially distributed about the ISI mean
pmf_exp = stats.expon.pdf(bins[1:], scale=mean_isi)
pmf_exp /= np.sum(pmf_exp)
#@title
#@markdown Run this cell to plot the three PMFs
fig, axes = plt.subplots(ncols=3, figsize=(18, 5))
dists = [# (subplot title, pmf, ylim)
("(1) Deterministic", pmf_single, (0, 1.05)),
("(1) Uniform", pmf_uniform, (0, 1.05)),
("(1) Exponential", pmf_exp, (0, 1.05))]
for ax, (label, pmf_, ylim) in zip(axes, dists):
pmf_ = np.insert(pmf_, 0, pmf_[0])
ax.plot(bins, pmf_, drawstyle="steps")
ax.fill_between(bins, pmf_, step="pre", alpha=0.4)
ax.set_title(label)
ax.set_xlabel("Inter-spike interval (s)")
ax.set_ylabel("Probability mass")
ax.set_xlim(isi_range);
ax.set_ylim(ylim);
print(
f"Deterministic: {entropy(pmf_single):.2f} bits",
f"Uniform: {entropy(pmf_uniform):.2f} bits",
f"Exponential: {entropy(pmf_exp):.2f} bits",
sep="\n",
)
#@title Video 3: Probabilities from histogram
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1Jk4y1B7cz', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1Jk4y1B7cz
###Markdown
In the previous example we created the PMFs by hand to illustrate idealized scenarios. How would we compute them from data recorded from actual neurons?One way is to convert the ISI histograms we've previously computed into discrete probability distributions using the following equation:\begin{align}p_i = \frac{n_i}{\sum\nolimits_{i}n_i}\end{align}where $p_i$ is the probability of an ISI falling within a particular interval $i$ and $n_i$ is the count of how many ISIs were observed in that interval. Exercise 2: Probability Mass FunctionYour second exercise is to implement a method that will produce a probability mass function from an array of ISI bin counts.To verify your solution, we will compute the probability distribution of ISIs from real neural data taken from the Steinmetz dataset.
###Code
neuron_idx = 283
isi = np.diff(steinmetz_spikes[neuron_idx])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
def pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
###########################################################################
# Exercise: Compute the PMF. Remove the next line to test your function
raise NotImplementedError("Student excercise: compute the PMF from ISI counts")
###########################################################################
pmf = ...
return pmf
# Uncomment when ready to test your function
# pmf = pmf_from_counts(counts)
# plot_pmf(pmf,isi_range)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/erlichlab/course-content/tree/master//tutorials/W1D1_ModelTypes/solutions/W1D1_Tutorial3_Solution_49231923.py)*Example output:* Section 3: Calculate entropy from a PMF
###Code
#@title Video 4: Calculating entropy from pmf
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1vA411e7Cd', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1vA411e7Cd
###Markdown
Now that we have the probability distribution for the actual neuron spiking activity, we can calculate its entropy.
###Code
print(f"Entropy for Neuron {neuron_idx}: {entropy(pmf):.2f} bits")
###Output
Entropy for Neuron 283: 3.36 bits
###Markdown
Interactive Demo: Entropy of neuronsWe can combine the above distribution plot and entropy calculation with an interactive widget to explore how the different neurons in the dataset vary in spiking activity and relative information. Note that the mean firing rate across neurons is not fixed, so some neurons with a uniform ISI distribution may have higher entropy than neurons with a more exponential distribution.
###Code
#@title
#@markdown **Run the cell** to enable the sliders.
def _pmf_from_counts(counts):
"""Given counts, normalize by the total to estimate probabilities."""
pmf = counts / np.sum(counts)
return pmf
def _entropy(pmf):
"""Given a discrete distribution, return the Shannon entropy in bits."""
  # keep only the non-zero entries to avoid an error from log2(0)
pmf = pmf[pmf > 0]
h = -np.sum(pmf * np.log2(pmf))
# absolute value applied to avoid getting a -0 result
return np.abs(h)
@widgets.interact(neuron=widgets.IntSlider(0, min=0, max=(len(steinmetz_spikes)-1)))
def steinmetz_pmf(neuron):
""" Given a neuron from the Steinmetz data, compute its PMF and entropy """
isi = np.diff(steinmetz_spikes[neuron])
bins = np.linspace(*isi_range, n_bins + 1)
counts, _ = np.histogram(isi, bins)
pmf = _pmf_from_counts(counts)
plot_pmf(pmf,isi_range)
plt.title(f"Neuron {neuron}: H = {_entropy(pmf):.2f} bits")
###Output
_____no_output_____
###Markdown
--- Summary
###Code
#@title Video 5: Summary of model types
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1F5411e7ww', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1F5411e7ww
|
.ipynb_checkpoints/PyMC3_Chris-checkpoint.ipynb | ###Markdown
PyStan
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import pystan

sns.set() # Nice plot aesthetic
np.random.seed(101)
model = """
data {
int<lower=0> N;
vector[N] x;
vector[N] y;
}
parameters {
real alpha;
real beta;
real<lower=0> sigma;
}
model {
alpha ~ normal(0, 3);  // weakly informative prior on the intercept
y ~ normal(alpha + beta * x, sigma);
}
"""
# Parameters to be inferred
alpha = 4.0
beta = 0.5
sigma = 1.0
# Generate and plot data
x = 10 * np.random.rand(100)
y = alpha + beta * x
y = np.random.normal(y, scale=sigma)
fig, ax = plt.subplots()
sns.scatterplot(x, y)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Scatter Plot of Data')
fig.show()
my_data = {'N': len(x), 'x':x, 'y':y}
# compile the model
sm = pystan.StanModel(model_code = model)
# Train the model and generate samples
fit = sm.sampling(data=my_data, iter=1000, chains=4, warmup=500,
thin=1, seed=101)
fit
summary_dict = fit.summary()
df = pd.DataFrame(summary_dict['summary'],
columns=summary_dict['summary_colnames'],
index=summary_dict['summary_rownames'])
alpha_mean, beta_mean = df['mean']['alpha'], df['mean']['beta']
# Extracting traces
alpha = fit['alpha']
beta = fit['beta']
sigma = fit['sigma']
lp = fit['lp__']
df
def plot_trace(param, param_name='parameter'):
"""Plot the trace and posterior of a parameter."""
# Summary statistics
mean = np.mean(param)
median = np.median(param)
cred_min, cred_max = np.percentile(param, 2.5), np.percentile(param, 97.5)
# Plotting
plt.subplot(2,1,1)
plt.plot(param)
plt.xlabel('samples')
plt.ylabel(param_name)
plt.axhline(mean, color='r', lw=2, linestyle='--')
plt.axhline(median, color='c', lw=2, linestyle='--')
plt.axhline(cred_min, linestyle=':', color='k', alpha=0.2)
plt.axhline(cred_max, linestyle=':', color='k', alpha=0.2)
plt.title('Trace and Posterior Distribution for {}'.format(param_name))
plt.subplot(2,1,2)
plt.hist(param, 30, density=True); sns.kdeplot(param, shade=True)
plt.xlabel(param_name)
plt.ylabel('density')
plt.axvline(mean, color='r', lw=2, linestyle='--',label='mean')
plt.axvline(median, color='c', lw=2, linestyle='--',label='median')
plt.axvline(cred_min, linestyle=':', color='k', alpha=0.2, label='95% CI')
plt.axvline(cred_max, linestyle=':', color='k', alpha=0.2)
plt.gcf().tight_layout()
plt.legend()
plot_trace(alpha, 'alpha')
plot_trace(beta, 'beta')
plot_trace(sigma, 'sigma')
###Output
_____no_output_____
###Markdown
PyMC3 Dirichlet Process Stick Breaking
###Code
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
import theano.tensor as tt

def stick_breaking(beta):
    # Stick-breaking construction: turn iid Beta draws into mixture weights
    portion_remaining = tt.concatenate([[1], tt.extra_ops.cumprod(1 - beta)[:-1]])
    return beta * portion_remaining
dp = pm.Model()
with dp:
alpha = 10*pm.Gamma('alpha', 1, 1)
beta = pm.Beta('beta', 1, alpha, shape=5)
pi = pm.Deterministic('pi', stick_breaking(beta))
with dp:
trace = pm.sample(8, tune=10, init='advi', random_seed=1)
trace['pi']
trace['alpha']
###Output
_____no_output_____
###Markdown
DP Mixture Model with MCMC and ADVI
###Code
from sklearn import datasets

# import some data to play with
iris = datasets.load_iris()
iris['data']
# faith = pd.read_csv('faithful.csv')
# faith.waiting.values
def stick_breaking(beta):
portion_remaining = tt.concatenate([[1], tt.extra_ops.cumprod(1 - beta)[:-1]])
return beta * portion_remaining
# data = faith.waiting.values
data = iris['data'][:,2]
import seaborn as sns
sns.distplot(data)
K = 30
with pm.Model() as model:
alpha = pm.Gamma('alpha', 1., 1.)
beta = pm.Beta('beta', 1., alpha, shape=K)
w = pm.Deterministic('w', stick_breaking(beta))
tau = pm.Gamma('tau', 1., 1., shape=K)
lambda_ = pm.Uniform('lambda', 0, 5, shape=K)
mu = pm.Normal('mu', 0, tau=lambda_ * tau, shape=K)
obs = pm.NormalMixture('obs', w, mu, tau=lambda_ * tau,
observed=data)
###Output
_____no_output_____
###Markdown
MCMC
###Code
SEED = 1
with model:
step_metro = pm.step_methods.metropolis.Metropolis()
step_hmc = pm.step_methods.hmc.hmc.HamiltonianMC()
step_nuts = pm.step_methods.hmc.nuts.NUTS()
step = step_nuts
trace = pm.sample(200, step, random_seed=SEED, init='advi')
!pip install arviz
pm.traceplot(trace, var_names=['alpha']);
pm.traceplot(trace, var_names=['w'])
pm.traceplot(trace, var_names=['mu'])
fig, ax = plt.subplots(figsize=(8, 6))
plot_w = np.arange(K) + 1
ax.bar(plot_w - 0.5, trace['w'].mean(axis=0), width=1., lw=0);
ax.set_xlim(0.5, K);
ax.set_xlabel('Component');
ax.set_ylabel('Posterior expected mixture weight');
###Output
_____no_output_____
###Markdown
ADVI
###Code
η = .1
s = shared(η)
def reduce_rate(a, h, i):
s.set_value(η/((i/minibatch_size)+1)**.7)
# we have a sparse dataset; it's better to have a dense batch so that all words occur there
minibatch_size = 128
# defining minibatch
# doc_t_minibatch = pm.Minibatch(docs_tr.toarray(), minibatch_size)
# doc_t = shared(docs_tr.toarray()[:minibatch_size])
# local_RVs = OrderedDict([alpha, beta, mu])
with model:
# approx = pm.MeanField(local_rv=local_RVs)
approx = pm.MeanField()
approx.scale_cost_to_minibatch = False
inference = pm.KLqp(approx)
inference.fit(10000, callbacks=[reduce_rate],
obj_optimizer=pm.sgd(learning_rate=s))
# more_obj_params=encoder_params,
# total_grad_norm_constraint=200,
# more_replacements={doc_t: doc_t_minibatch})
with model:
%time approx = pm.fit(n=4500, obj_optimizer=pm.adagrad(learning_rate=1e-1))
means = approx.bij.rmap(approx.mean.eval())
cov = approx.cov.eval()
sds = approx.bij.rmap(np.diag(cov)**.5)
means.keys()
plt.plot(means['mu'])
plt.plot(means['tau_log__'])
plt.plot(cov)
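# Added illustration (assumes the `approx` object fitted above): draw samples from
# the mean-field approximation so the ADVI result can be inspected with the same
# plotting tools as the MCMC trace.
approx_trace = approx.sample(1000)
pm.traceplot(approx_trace, var_names=['alpha'])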
###Output
_____no_output_____
###Markdown
Text Data - word2vec embeddings Loading Data
###Code
# The number of words in the vocabulary
n_words = 50
print("Loading dataset...")
t0 = time()
dataset = fetch_20newsgroups(shuffle=True, random_state=1,
remove=('headers', 'footers', 'quotes'))
data_samples = dataset.data
print("done in %0.3fs." % (time() - t0))
# Use tf (raw term count) features for LDA.
print("Extracting tf features for LDA...")
tf_vectorizer = CountVectorizer(max_df=0.95, min_df=2,
max_features=n_words,
# If not None, build a vocabulary that
# only consider the top max_features
# ordered by term frequency across the corpus.
stop_words='english')
t0 = time()
tf = tf_vectorizer.fit_transform(data_samples)
feature_names = tf_vectorizer.get_feature_names()
print("done in %0.3fs." % (time() - t0))
tf.toarray()[0:5].shape
data_samples[0:5]
feature_names
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
import nltk
nltk.download('punkt')
nltk.download('stopwords')
'aa,fsad' + 'fadsfa'
# Cleaning the text
processed_article = ''
for paragraph in data_samples:
processed_article += paragraph
processed_article += ' '
processed_article = re.sub('[^a-zA-Z]', ' ', processed_article )
processed_article = re.sub(r'\s+', ' ', processed_article)
# Preparing the dataset
all_sentences = nltk.sent_tokenize(processed_article)
all_words = [nltk.word_tokenize(sent) for sent in all_sentences]
# Removing Stop Words
from nltk.corpus import stopwords
for i in range(len(all_words)):
all_words[i] = [w for w in all_words[i] if w not in stopwords.words('english')]
from gensim.models import Word2Vec
word2vec = Word2Vec(all_words, size=5, min_count=1)
vocabulary = word2vec.wv.vocab
print(vocabulary)
word2vec.wv['try'].shape
word2vec.wv['try']
word2vec.wv['try']
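# Added example: nearest neighbours of the same word in the learned embedding
# space (cosine similarity over the 5-dimensional vectors trained above).
word2vec.wv.most_similar('try', topn=5)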
###Output
_____no_output_____
###Markdown
Multivariate Normal Mixture HDP
###Code
from sklearn import datasets
def stick_breaking(beta):
portion_remaining = tt.concatenate([[1], tt.extra_ops.cumprod(1 - beta)[:-1]])
return beta * portion_remaining
# import some data to play with
iris = datasets.load_iris()
iris['data']
# data = faith.waiting.values
data = iris['data'][:,1]
dim = np.shape(data)[0]
dim
K_topics = 50
M_topics_per_doc = 20
atoms = np.random.normal(0, .01, K)
# Custom distribution in PyMC3
def logp_G0(atoms, w, value):
idx = np.where(atoms == value)[0] # Example return value: (array([], dtype=int32),)
if len(idx) == 0:
return 0
    return np.log(w[idx[0]])
# def random_G)()
with pm.Model() as hdp_model:
# Topics DP
alpha_0 = pm.Gamma('alpha_0', 1., 1.)
    beta_0 = pm.Beta('beta_0', 1., alpha_0, shape=K)
    w_0 = pm.Deterministic('w_0', stick_breaking(beta_0))
# Get samples from normal
tau_0 = pm.Gamma('tau_0', 1., 1., shape=K)
lambda_0 = pm.Uniform('lambda_0', 0, 5, shape=K)
    atoms = pm.Normal('atoms', np.zeros(K), tau=tau_0*lambda_0, shape=K) # Should this be a RV?
# Need to get samples from G0 --> these become the mu's for the final dist
# sample_theta = np.random.choice(atoms, M_topics_per_doc, p=w_0)
# https://discourse.pymc.io/t/multivariatre-categorical-variable-with-different-values/1008/2
tmp = pm.Categorical('tmp', w_0, shape=M_topics_per_doc)
sample_theta = pm.Deterministic('shared_theta', atoms[tmp])
# Want G0 to be atoms in second level of HDP
# G0 = pm.DensityDist('G0', logp_G0, observed={'atoms':atoms, 'w':w_0, 'value': sample_theta})
# mu = pm.Normal('mu', 0, tau=lambda_ * tau, shape=K) # I don't think we want this to be a RV
# obs = pm.NormalMixture('obs', w, mu, tau=lambda_ * tau,
# observed=data)
tau_j = pm.Gamma('tau_j', 1., 1., shape=M_topics_per_doc)
lambda_j = pm.Uniform('lambda_j', 0, 5, shape=M_topics_per_doc)
# Doc DP
alpha_j = pm.Gamma('alpha_j', 1., 1.) # We can choose to use the same alpha_0 as above
    beta_j = pm.Beta('beta_j', 1., alpha_j, shape=M_topics_per_doc) # Same comment as above
w_j = pm.Deterministic('w_j', stick_breaking(beta_j))
obs = pm.NormalMixture('obs', w_j, sample_theta, tau=lambda_j * tau_j, observed=data)
# cov = np.array([[1., 0.5], [0.5, 2]])
# mu = np.zeros(2)
# vals = pm.MvNormal('vals', mu=mu, cov=cov, shape=(5, 2), observed=data)
print(hdp_model.basic_RVs)
SEED=1
with hdp_model:
# step_metro = pm.step_methods.metropolis.Metropolis()
# step_hmc = pm.step_methods.hmc.hmc.HamiltonianMC()
# step_nuts = pm.step_methods.hmc.nuts.NUTS()
# step = step_metro
trace = pm.sample(5000, random_seed=SEED, init='advi', exception_verbosity='high')
with hdp_model:
approx = pm.fit(n=4500, obj_optimizer=pm.adagrad(learning_rate=1e-1))
###Output
_____no_output_____ |
handsOn_lecture15_exp_families_bayesian_networks/handsOn_lecture15_exp_families_bayesian_networks.ipynb | ###Markdown
$$ \LaTeX \text{ command declarations here.}\newcommand{\R}{\mathbb{R}}\renewcommand{\vec}[1]{\mathbf{1}}\newcommand{\X}{\mathcal{X}}\newcommand{\D}{\mathcal{D}}\newcommand{\G}{\mathcal{G}}\newcommand{\Parents}{\mathrm{Parents}}\newcommand{\NonDesc}{\mathrm{NonDesc}}\newcommand{\I}{\mathcal{I}}$$ Exponential Family Distributions**DEF:** $p(x | \theta)$ has **exponential family form** if:$$\begin{align}p(x | \theta)&= \frac{1}{Z(\theta)} h(x) \exp\left[ \eta(\theta)^T \phi(x) \right] \\&= h(x) \exp\left[ \eta(\theta)^T \phi(x) - A(\theta) \right]\end{align}$$- $Z(\theta)$ is the **partition function** for normalization- $A(\theta) = \log Z(\theta)$ is the **log partition function**- $\phi(x) \in \R^d$ is a vector of **sufficient statistics**- $\eta(\theta)$ maps $\theta$ to a set of **natural parameters**- $h(x)$ is a scaling constant, usually $h(x)=1$ Problem: Bernoulli in exponential family formAs mentioned in lecture, the *Bernoulli* distribution can be described in expontial family form:$$\begin{align}\mathrm{Ber}(x | \mu)&= \mu^x (1-\mu)^{1-x} \\ &= (1-\mu) \exp\left[ x \log\frac{\mu}{1-\mu} \right]\end{align}$$This gives **natural parameters** $\eta = \log \frac{\mu}{1-\mu}$.**Problem**: What is the log partition function $A(\eta)$? Problem: Guassian Distribution in exp family formRecall that the gaussian distribution can be written as$$p(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left(-\frac{(\mu-x)^2}{2 \sigma^2} \right)$$**Problem**: Write this in exponential family form. Questions:1. What is $\phi(x)$?1. What is the parameterization $\eta(\mu,\sigma^2)$?1. What is the log-partition $A(\eta)$? Magic of the log partition functionDerivatives of the **log-partition function** $A(\theta)$ yield **cumulants** of the sufficient statistics (assume $h(x)=1$ and $\eta(\theta) = \theta$):- $\nabla_\theta A(\theta) = E_{x}[\phi(x)]$- $\nabla^2_\theta A(\theta) = \text{Cov}[ \phi(x) ]$**Problem**: Prove the first statement using calculus.*Hint*: While this fact isn't always true, in this case you can use the following:$$\nabla_\theta \left( \int f(x,\theta) dx \right) = \int \nabla_\theta f(x,\theta) dx$$ Problem: Conditional independence in PGMs**Which of the following are true?**:1. $P(C,B|A) = P(C|A) P(B |A)$2. $P(C,B) = P(C) P(B)$Prove your answer! Problem: Conditional independence in PGMs**Which of the following are true?**:1. $P(C,B|A) = P(C|B,A)$2. $P(C,B) = P(C) P(B)$Prove your answer! Bayesian NetworksLet's consider a bayesian network representing different factors that determine whether you make it to class on time: Exercise: express joint distribution in terms of product of factorsExpress the joint distribution of this network using the chain rule.P(W,R,T,S,B,C) = ...*your answer goes here*
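A worked sketch for reference (added; it follows directly from the definitions above). For the Bernoulli case, inverting $\eta = \log\frac{\mu}{1-\mu}$ gives $\mu = \frac{1}{1+e^{-\eta}}$, so$$A(\eta) = -\log(1-\mu) = \log\left(1 + e^{\eta}\right).$$For the first cumulant identity, write $A(\theta) = \log \int \exp\left[\theta^T \phi(x)\right] dx$ and differentiate under the integral sign:$$\nabla_\theta A(\theta) = \frac{\int \phi(x) \exp\left[\theta^T \phi(x)\right] dx}{\int \exp\left[\theta^T \phi(x)\right] dx} = E_x[\phi(x)].$$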
###Code
%%html
<style>
div.probs table {
float:left;
margin-left: 1em;
margin-top: 1em;
}
div.probs br.clear {
clear:both;
}
</style>
###Output
_____no_output_____ |
ccmi_2019_03_06/.ipynb_checkpoints/2019-03-06_05-02 UCSF GSEA 2-checkpoint.ipynb | ###Markdown
Gene Set Enrichment Analysis (GSEA)
Gene Set Enrichment Analysis (GSEA) identifies gene sets that are up- or down-regulated between two conditions. The GSEA method can be summarized as:
1. Take gene expression data from two different types of samples (e.g., treated vs non-treated) and rank all genes according to their degree of differential expression across the groups.
2. Take a group of genes of interest (e.g., pathway, locus, etc.) and determine whether they are differentially expressed as a set (enriched) within the ranked gene expression data.
3. Determine the significance of the enrichment analysis score via a permutation test: randomly swap the gene-set labels of the data and repeat the test many times.
Sign in to GenePattern
If you haven't yet logged in, enter your credentials into the cell below and click Login:
###Code
# Requires GenePattern Notebook: pip install genepattern-notebook
import gp
import genepattern
# Username and password removed for security reasons.
genepattern.display(genepattern.session.register("https://cloud.genepattern.org/gp", "", ""))
###Output
_____no_output_____
###Markdown
Workshop data and GSEA parameters
We will run GSEA on the breast cancer and normal samples provided previously in the workshop to determine which gene sets are coordinately up- and down-regulated. This dataset will require us to change one of the default parameters: collapse dataset. GSEA has a collapse dataset parameter. This is used when the dataset contains accession numbers or other IDs that may contain multiple representatives per gene. It is used most frequently for microarray data, to collapse array probes to gene names. Because the dataset we have provided already has gene names associated with it, we will choose not to collapse it. Note that most gene sets will be collections of gene names (in theory they could be collections of other kinds of IDs but in practice this is not commonplace).
Instructions
1. For the expression dataset parameter, click and drag BRCA_HUGO_symbols.preprocessed.gct into the "Enter Path or URL" text box.
2. For the gene sets database parameter, the c2.all.v6.0.symbols.gmt (Curated gene sets) file has been pre-populated.
3. For the permutation type parameter, select gene_set.
4. For the phenotype labels parameter, click and drag BRCA_HUGO_symbols.preprocessed.cls into the "Enter Path or URL" text box.
5. Set the collapse dataset parameter to false.
6. Click the button Run on the analysis below.
###Code
gsea_task = gp.GPTask(genepattern.get_session(0), 'urn:lsid:broad.mit.edu:cancer.software.genepattern.module.analysis:00072')
gsea_job_spec = gsea_task.make_job_spec()
gsea_job_spec.set_parameter("expression.dataset", "")
gsea_job_spec.set_parameter("gene.sets.database", "")
gsea_job_spec.set_parameter("gene.sets.database.file", "")
gsea_job_spec.set_parameter("number.of.permutations", "1000")
gsea_job_spec.set_parameter("phenotype.labels", "")
gsea_job_spec.set_parameter("target.profile", "")
gsea_job_spec.set_parameter("collapse.dataset", "false")
gsea_job_spec.set_parameter("permutation.type", "gene_set")
gsea_job_spec.set_parameter("chip.platform.file", "")
gsea_job_spec.set_parameter("scoring.scheme", "weighted")
gsea_job_spec.set_parameter("metric.for.ranking.genes", "Signal2Noise")
gsea_job_spec.set_parameter("gene.list.sorting.mode", "real")
gsea_job_spec.set_parameter("gene.list.ordering.mode", "descending")
gsea_job_spec.set_parameter("max.gene.set.size", "500")
gsea_job_spec.set_parameter("min.gene.set.size", "15")
gsea_job_spec.set_parameter("collapsing.mode.for.probe.sets.with.more.than.one.match", "Max_probe")
gsea_job_spec.set_parameter("normalization.mode", "meandiv")
gsea_job_spec.set_parameter("randomization.mode", "no_balance")
gsea_job_spec.set_parameter("omit.features.with.no.symbol.match", "true")
gsea_job_spec.set_parameter("make.detailed.gene.set.report", "true")
gsea_job_spec.set_parameter("median.for.class.metrics", "false")
gsea_job_spec.set_parameter("number.of.markers", "100")
gsea_job_spec.set_parameter("plot.graphs.for.the.top.sets.of.each.phenotype", "20")
gsea_job_spec.set_parameter("random.seed", "timestamp")
gsea_job_spec.set_parameter("save.random.ranked.lists", "false")
gsea_job_spec.set_parameter("output.file.name", "<expression.dataset_basename>.zip")
genepattern.display(gsea_task)
###Output
_____no_output_____ |
acceleration/multi_gpu_test.ipynb | ###Markdown
Multi GPU Test[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/acceleration/multi_gpu_test.ipynb) Setup environment
###Code
%pip install -q "monai[ignite]"
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from monai.config import print_config
from monai.engines import create_multigpu_supervised_trainer
from monai.networks.nets import UNet
print_config()
###Output
MONAI version: 0.4.0
Numpy version: 1.19.1
Pytorch version: 1.7.0a0+7036e91
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 0563a4467fa602feca92d91c7f47261868d171a1
Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.2.1
scikit-image version: 0.15.0
Pillow version: 8.0.1
Tensorboard version: 2.2.0
gdown version: 3.12.2
TorchVision version: 0.8.0a0
ITK version: 5.1.2
tqdm version: 4.54.1
lmdb version: 1.0.0
psutil version: 5.7.2
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Test GPUs
###Code
lr = 1e-3
device = torch.device("cuda:0")
net = UNet(
dimensions=2,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
).to(device)
def fake_loss(y_pred, y):
return (y_pred[0] + y).sum()
def fake_data_stream():
while True:
yield torch.rand((10, 1, 64, 64)), torch.rand((10, 1, 64, 64))
###Output
_____no_output_____
###Markdown
1 GPU
###Code
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, [device])
trainer.run(fake_data_stream(), 2, 2)
###Output
_____no_output_____
###Markdown
all GPUs
###Code
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, None)
trainer.run(fake_data_stream(), 2, 2)
###Output
_____no_output_____
###Markdown
CPU
###Code
net = net.to(torch.device("cpu:0"))
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, [])
trainer.run(fake_data_stream(), 2, 2)
###Output
_____no_output_____
###Markdown
Multi GPU Test[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/main/acceleration/multi_gpu_test.ipynb) Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[ignite]"
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from monai.config import print_config
from monai.engines import create_multigpu_supervised_trainer
from monai.networks.nets import UNet
print_config()
###Output
MONAI version: 0.4.0
Numpy version: 1.19.1
Pytorch version: 1.7.0a0+7036e91
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 0563a4467fa602feca92d91c7f47261868d171a1
Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.2.1
scikit-image version: 0.15.0
Pillow version: 8.0.1
Tensorboard version: 2.2.0
gdown version: 3.12.2
TorchVision version: 0.8.0a0
ITK version: 5.1.2
tqdm version: 4.54.1
lmdb version: 1.0.0
psutil version: 5.7.2
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Test GPUs
###Code
max_epochs = 2
lr = 1e-3
device = torch.device("cuda:0")
net = UNet(
spatial_dims=2,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
).to(device)
def fake_loss(y_pred, y):
return (y_pred[0] + y).sum()
def fake_data_stream():
while True:
yield torch.rand((10, 1, 64, 64)), torch.rand((10, 1, 64, 64))
###Output
_____no_output_____
###Markdown
1 GPU
###Code
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, [device])
trainer.run(fake_data_stream(), max_epochs=max_epochs, epoch_length=2)
###Output
_____no_output_____
###Markdown
all GPUs
###Code
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, None)
trainer.run(fake_data_stream(), max_epochs=max_epochs, epoch_length=2)
###Output
_____no_output_____
###Markdown
CPU
###Code
net = net.to(torch.device("cpu:0"))
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, [])
trainer.run(fake_data_stream(), max_epochs=max_epochs, epoch_length=2)
###Output
_____no_output_____
###Markdown
Multi GPU Test[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/acceleration/multi_gpu_test.ipynb) Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[ignite]"
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from monai.config import print_config
from monai.engines import create_multigpu_supervised_trainer
from monai.networks.nets import UNet
print_config()
###Output
MONAI version: 0.4.0
Numpy version: 1.19.1
Pytorch version: 1.7.0a0+7036e91
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 0563a4467fa602feca92d91c7f47261868d171a1
Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.2.1
scikit-image version: 0.15.0
Pillow version: 8.0.1
Tensorboard version: 2.2.0
gdown version: 3.12.2
TorchVision version: 0.8.0a0
ITK version: 5.1.2
tqdm version: 4.54.1
lmdb version: 1.0.0
psutil version: 5.7.2
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Test GPUs
###Code
max_epochs = 2
lr = 1e-3
device = torch.device("cuda:0")
net = UNet(
spatial_dims=2,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
).to(device)
def fake_loss(y_pred, y):
return (y_pred[0] + y).sum()
def fake_data_stream():
while True:
yield torch.rand((10, 1, 64, 64)), torch.rand((10, 1, 64, 64))
###Output
_____no_output_____
###Markdown
1 GPU
###Code
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, [device])
trainer.run(fake_data_stream(), max_epochs=max_epochs, epoch_length=2)
###Output
_____no_output_____
###Markdown
all GPUs
###Code
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, None)
trainer.run(fake_data_stream(), max_epochs=max_epochs, epoch_length=2)
###Output
_____no_output_____
###Markdown
CPU
###Code
net = net.to(torch.device("cpu:0"))
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, [])
trainer.run(fake_data_stream(), max_epochs=max_epochs, epoch_length=2)
###Output
_____no_output_____
###Markdown
Multi GPU Test[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/acceleration/multi_gpu_test.ipynb) Setup environment
###Code
!python -c "import monai" || pip install -q monai[ignite]
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from monai.config import print_config
from monai.engines import create_multigpu_supervised_trainer
from monai.networks.nets import UNet
print_config()
###Output
MONAI version: 0.4.0
Numpy version: 1.19.1
Pytorch version: 1.7.0a0+7036e91
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 0563a4467fa602feca92d91c7f47261868d171a1
Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.2.1
scikit-image version: 0.15.0
Pillow version: 8.0.1
Tensorboard version: 2.2.0
gdown version: 3.12.2
TorchVision version: 0.8.0a0
ITK version: 5.1.2
tqdm version: 4.54.1
lmdb version: 1.0.0
psutil version: 5.7.2
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Test GPUs
###Code
max_epochs = 2
lr = 1e-3
device = torch.device("cuda:0")
net = UNet(
dimensions=2,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
).to(device)
def fake_loss(y_pred, y):
return (y_pred[0] + y).sum()
def fake_data_stream():
while True:
yield torch.rand((10, 1, 64, 64)), torch.rand((10, 1, 64, 64))
###Output
_____no_output_____
###Markdown
1 GPU
###Code
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, [device])
trainer.run(fake_data_stream(), max_epochs=max_epochs, epoch_length=2)
###Output
_____no_output_____
###Markdown
all GPUs
###Code
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, None)
trainer.run(fake_data_stream(), max_epochs=max_epochs, epoch_length=2)
###Output
_____no_output_____
###Markdown
CPU
###Code
net = net.to(torch.device("cpu:0"))
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, [])
trainer.run(fake_data_stream(), max_epochs=max_epochs, epoch_length=2)
###Output
_____no_output_____
###Markdown
Multi GPU Test[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/acceleration/multi_gpu_test.ipynb) Setup environment
###Code
%pip install -q "monai[ignite]"
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from monai.config import print_config
from monai.engines import create_multigpu_supervised_trainer
from monai.networks.nets import UNet
print_config()
###Output
MONAI version: 0.3.0rc4
Python version: 3.6.10 |Anaconda, Inc.| (default, May 8 2020, 02:54:21) [GCC 7.3.0]
OS version: Linux (4.4.0-131-generic)
Numpy version: 1.19.1
Pytorch version: 1.7.0a0+8deb4fe
MONAI flags: HAS_EXT = False, USE_COMPILED = False
Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.1.1
scikit-image version: 0.15.0
Pillow version: 7.0.0
Tensorboard version: 2.2.0
gdown version: 3.12.2
TorchVision version: 0.8.0a0
ITK version: 5.1.0
tqdm version: 4.50.0
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Test GPUs
###Code
lr = 1e-3
device = torch.device("cuda:0")
net = UNet(
dimensions=2,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
).to(device)
def fake_loss(y_pred, y):
return (y_pred[0] + y).sum()
def fake_data_stream():
while True:
yield torch.rand((10, 1, 64, 64)), torch.rand((10, 1, 64, 64))
###Output
_____no_output_____
###Markdown
1 GPU
###Code
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, [device])
trainer.run(fake_data_stream(), 2, 2)
###Output
_____no_output_____
###Markdown
all GPUs
###Code
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, None)
trainer.run(fake_data_stream(), 2, 2)
###Output
_____no_output_____
###Markdown
CPU
###Code
net = net.to(torch.device("cpu:0"))
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, [])
trainer.run(fake_data_stream(), 2, 2)
###Output
_____no_output_____
###Markdown
Multi GPU Test[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/acceleration/multi_gpu_test.ipynb) Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[ignite]"
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from monai.config import print_config
from monai.engines import create_multigpu_supervised_trainer
from monai.networks.nets import UNet
print_config()
###Output
MONAI version: 0.4.0
Numpy version: 1.19.1
Pytorch version: 1.7.0a0+7036e91
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 0563a4467fa602feca92d91c7f47261868d171a1
Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.2.1
scikit-image version: 0.15.0
Pillow version: 8.0.1
Tensorboard version: 2.2.0
gdown version: 3.12.2
TorchVision version: 0.8.0a0
ITK version: 5.1.2
tqdm version: 4.54.1
lmdb version: 1.0.0
psutil version: 5.7.2
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Test GPUs
###Code
max_epochs = 2
lr = 1e-3
device = torch.device("cuda:0")
net = UNet(
dimensions=2,
in_channels=1,
out_channels=1,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
).to(device)
def fake_loss(y_pred, y):
return (y_pred[0] + y).sum()
def fake_data_stream():
while True:
yield torch.rand((10, 1, 64, 64)), torch.rand((10, 1, 64, 64))
###Output
_____no_output_____
###Markdown
1 GPU
###Code
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, [device])
trainer.run(fake_data_stream(), max_epochs=max_epochs, epoch_length=2)
###Output
_____no_output_____
###Markdown
all GPUs
###Code
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, None)
trainer.run(fake_data_stream(), max_epochs=max_epochs, epoch_length=2)
###Output
_____no_output_____
###Markdown
CPU
###Code
net = net.to(torch.device("cpu:0"))
opt = torch.optim.Adam(net.parameters(), lr)
trainer = create_multigpu_supervised_trainer(net, opt, fake_loss, [])
trainer.run(fake_data_stream(), max_epochs=max_epochs, epoch_length=2)
###Output
_____no_output_____ |
Pymaceuticals/Pymaceuticals.ipynb | ###Markdown
Observations and Insights
###Code
f = open("observations/obs.txt", "r")
print(f.read())
###Output
Observations and Insights
From just a general overview of the data analysis, we can see that out of all the treatments,
there were only two treatments that showed promising results: Ramicane and Capomulin. From the summary
statistics, those two treatments were the only ones with mean and median tumor volumes below the initial
tumor volumes. Perhaps a better representation are the box plots of final tumor volumes showing those
two treatments trending below the initial tumor volume of each mouse.
We can also look more closely at data for mice treated with the Capomulin regimen, and see that it is
indeed a promising treatment showing good results for tumor reduction. While the data analysis only shows
the line plot for one specific mouse treated with the Capomulin regimen, when changing the observed mouse
and briefly looking at the line plots for different mice treated with Capomulin, we can see that each mouse
shows a general downward trend of the tumor volume.
Lastly the linear regression analysis of Average Tumor Volume vs Mouse Weight for the Capomulin Regimen shows
a decently strong linear trend between the two variables; correlation coefficient is 0.842. Since the initial
tumor volumes were constant for each mouse, this perhaps suggests that a larger dose or some other variable change
in treatment is required for larger or heavier mice (eventually people) to effectively reduce tumor volume.
###Markdown
Dependencies and starter code
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
#merge_df = pd.merge(bitcoin_df,dash_df,on="Date")
mouse_data = pd.merge(mouse_metadata,study_results, on = "Mouse ID")
mouse_data.head(10)
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
#groupby
regimen = mouse_data.groupby('Drug Regimen')
#constructing the summary table
#mean
regimen_df = pd.DataFrame(regimen['Tumor Volume (mm3)'].mean())
regimen_df = regimen_df.rename(columns = {'Tumor Volume (mm3)':'Mean Tumor Volume'})
#median
regimen_df['Median Tumor Volume'] = regimen['Tumor Volume (mm3)'].median()
#variance
regimen_df['Variance Tumor Volume'] = regimen['Tumor Volume (mm3)'].var()
#standard deviation
regimen_df['Standard Deviation Tumor Volume'] = regimen['Tumor Volume (mm3)'].std()
#SEM
regimen_df['SEM Volume'] = regimen['Tumor Volume (mm3)'].sem()
#sort them by least to greatest mean tumor volume
regimen_df = regimen_df.sort_values("Mean Tumor Volume")
regimen_df
###Output
_____no_output_____
###Markdown
Bar plots
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
#originally did this but can actually just do a value_counts; comes out sorted descending too
#data_points_df = pd.DataFrame(regimen['Drug Regimen'].count())
#data_points_df = data_points_df.sort_values("Data Points",ascending=False)
data_points_df = pd.DataFrame(mouse_data['Drug Regimen'].value_counts())
data_points_df = data_points_df.rename(columns = {'Drug Regimen':'Data Points'})
#plot!
myplot = data_points_df.plot(kind = 'bar',title='Drug Regimen Data Points', legend = False, rot = 80)
myplot.set_ylabel("Data Points")
#data_points_df
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
#reset index here so we can call the drug regimen column
data_points_df = data_points_df.reset_index()
#data_points_df
#set x and y axis from columns
x_axis= data_points_df['index']
y_axis = data_points_df['Data Points']
#construct plot
#I'll just change the color here
plt.bar(x_axis, y_axis, width = .6, color = 'gray')
#rotate the x axis labels
plt.xticks(rotation = 80)
plt.ylim(0,max(data_points_df['Data Points'])+10)
plt.xlabel("Drug Regimen")
plt.ylabel('Data Points')
plt.title("Drug Regimen Data Points")
plt.show()
###Output
_____no_output_____
###Markdown
Pie plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
#first we drop the mouse id duplicates, since all we care about is each unique mouse and its gender
mice_gender = mouse_data.drop_duplicates(subset='Mouse ID')
#df with gender and counts of each gender
mice_gender_df = pd.DataFrame(mice_gender.groupby('Sex').count()['Mouse ID'])
mice_gender_df =mice_gender_df.rename(columns = {'Mouse ID':'Counts'})
#construct plot
mice_gender_df.plot(kind ='pie' ,y = 'Counts', figsize = (5.5,5.5),autopct = "%.1f%%", colors = ['pink', 'blue'], legend = False,
title = "Mice Gender Distribution",startangle=140)
#mice_gender_df
# Generate a pie plot showing the distribution of female versus male mice using pyplot
#similar to before we need to reset the index
mice_gender_df = mice_gender_df.reset_index()
#use these columns for parameters for the pie chart
counts = mice_gender_df['Counts']
gender = mice_gender_df['Sex']
#found a bunch of colors here
#https://matplotlib.org/3.1.0/gallery/color/named_colors.html
colors = ["magenta","deepskyblue"]
#construct plot
plt.figure(figsize = (5,5))
plt.pie(counts, colors=colors,labels=gender, autopct="%.1f%%", startangle=140)
plt.title("Mice Gender Distribution")
plt.axis("equal")
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, outliers and boxplots
###Code
#Calculate the final tumor volume of each mouse across four of the most promising treatment regimens.
#treatments given in directions
top_four_treatments=['Capomulin', 'Ramicane', 'Infubinol','Ceftamin']
#create df with the final tumor volumes for each mouse treated with the top four most promising treatments
#we can do a drop duplicate and keep the last row because that is the final timepoint for each mouse
tumor_df1 = mouse_data.drop_duplicates(subset = ['Mouse ID'], keep ='last')
tumor_df1 = tumor_df1[['Mouse ID','Drug Regimen','Tumor Volume (mm3)']]
tumor_df1 = tumor_df1.rename(columns = {'Tumor Volume (mm3)':'Final Tumor Volume'})
#filter the df down drug regimen column with only the top four treatments
tumor_df1 = tumor_df1.loc[(tumor_df1['Drug Regimen'].isin(top_four_treatments)),:]
tumor_df1=tumor_df1.reset_index(drop=True)
#tumor_df1
#Calculate the IQR and quantitatively determine if there are any potential outliers.
#Going to create a df with the IQR analysis, doesn't say we have to do this but seems like a good way
#data frame will include quartiles, iqr, bounds, and outlier values across the four treatments
#creating empty lists that will be used to create the IQR analysis df
#quartiles, outliers
lowerq=[]
median=[]
upperq=[]
outliercount=[]
outliervalue=[]
#use this loop to separate each of the four drug treatments and calculate quantiles.
#will append these values to the empty lists
for drug in top_four_treatments:
drug_df = tumor_df1.loc[(tumor_df1['Drug Regimen'] == drug),:]
quartiles = drug_df['Final Tumor Volume'].quantile([.25,.5,.75])
lowerq.append(round(quartiles[0.25],2))
median.append(round(quartiles[0.5],2))
upperq.append(round(quartiles[0.75],2))
#use list comprehensions to create lists with calculated iqr and bounds from quartiles
iqr=[upperq[i]-lowerq[i] for i in range(len(lowerq))]
lower_bound=[lowerq[i]-1.5*iqr[i] for i in range(len(lowerq))]
upper_bound=[upperq[i]+1.5*iqr[i] for i in range(len(upperq))]
#this loop will actually create a separate df with only outliers for each separate df
#then it will append a list of outliers for each separate treatment to the list 'outliervalue'
for i in range(len(top_four_treatments)):
drug_df = tumor_df1.loc[(tumor_df1['Drug Regimen'] == top_four_treatments[i]),:]
#filter with bounds
outlier_df= drug_df.loc[((drug_df['Final Tumor Volume'] < lower_bound[i]) |
(drug_df['Final Tumor Volume'] > upper_bound[i])),:]
    #since some of the regimens don't have any outliers, which creates an empty df; we can check for that with the if statement below
if outlier_df.empty == True:
outliervalue.append('n/a')
else:
outliervalue.append(np.round(outlier_df['Final Tumor Volume'].values[:],2))
#all of our lists should be filled now and we can create our df!
iqr_df=pd.DataFrame({"Drug Regimen":top_four_treatments,
"Lower Quartile": lowerq,
"Median":median,
"Upper Quartile": upperq,
"IQR":iqr,
"Lower Bound":lower_bound,
"Upper Bound":upper_bound,
"Outlier Values":outliervalue})
#set the index to drug regimen
iqr_df=iqr_df.set_index('Drug Regimen')
iqr_df
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
final_volume_values=[]
#will append a list of final tumor volumes for each separate treatment into our list
for drug in top_four_treatments:
drug_df = tumor_df1.loc[(tumor_df1['Drug Regimen'] == drug)]
final_volume_values.append(drug_df["Final Tumor Volume"].tolist())
#construct
fig1, ax1 = plt.subplots(figsize=(8,5))
#add axis titles
ax1.set_title('Final Tumor Volume Box Plots\n')
ax1.set_xlabel('\nDrug Regimen')
ax1.set_ylabel('Final Tumor Volume (mm3)\n')
#filerprops to define the outlier markers
flierprops = dict(marker='o', markerfacecolor='r', markersize=7, markeredgecolor='black')
ax1.boxplot(final_volume_values,flierprops=flierprops)
#rename the x axis ticks with the actual drug regimens
plt.xticks(np.arange(1,len(top_four_treatments)+1), top_four_treatments)
plt.ylim(0,80)
#plt.figure(figsize=(10,6))
plt.show()
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
#find all the mice treated specifically with Capomulin
drug = 'Capomulin'
cap_mice = mouse_data.loc[(mouse_data['Drug Regimen'] == drug),['Mouse ID','Drug Regimen']]
cap_mice=cap_mice.drop_duplicates(subset = 'Mouse ID')
#cap_mice
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
#so this cell should create a nicely formatted line plot for any mouse, just change the mouse variable
mouse = 'm601'
#filter original df with mouse of our choice
mymouse= mouse_data.loc[(mouse_data['Mouse ID'] == mouse),['Mouse ID','Timepoint','Tumor Volume (mm3)','Weight (g)']]
#setting axis with columns from df
x_axis = mymouse['Timepoint']
y_axis = mymouse['Tumor Volume (mm3)']
#labels
plt.xlabel('Timepoint (days)')
plt.ylabel('Tumor Volume (mm3)')
plt.title(f'Time vs Tumor Volume, Mouse ID: {mouse}\n')
#setting lims and ticks using min and max values so it will format nicely for any mouse, not just specific to a single mouse
plt.xlim(0,x_axis.max())
plt.ylim((round(y_axis.min()-2),(round(y_axis.max())+2)))
plt.yticks(np.arange((round(y_axis.min())-4),(y_axis.max()+4),2))
plt.xticks(np.arange((x_axis.min()),(x_axis.max()+5),5))
#more formatting
plt.grid()
# :O so clip_on=False makes it to where the marker isn't cut off by the edge of the plot!!
plt.plot(x_axis,y_axis,marker="o", color="blue", linewidth=1,clip_on=False)
plt.show()
#shows the df with lineplot data for specified mouse
#mymouse
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
#filter the original df with only Capomulin treated rows
#drug variable was defined earlier as 'Capomulin'
drug_filter= mouse_data.loc[(mouse_data['Drug Regimen'] == drug),:]
#groupby individual mouse
group_mouse = drug_filter.groupby(['Mouse ID'])
#create dataframe with mean tumor volume and weight of each individual mice
group_mouse_df = pd.DataFrame(group_mouse['Tumor Volume (mm3)'].mean())
group_mouse_df['Weight (g)'] = group_mouse['Weight (g)'].mean()
group_mouse_df = group_mouse_df.rename(columns = {'Tumor Volume (mm3)':'Mean Tumor Volume'})
#group_mouse_df.head()
# Calculate the correlation coefficient and linear regression model for
#mouse weight and average tumor volume for the Capomulin regimen, columns for scatterplot
#y vs x, directions say to plot mouse weight vs avg tumor volume
#doesn't really make sense because mouse weight is the independent variable, so I'm going to plot tumor volume vs weight
avgtumor = group_mouse_df.iloc[:, 0]
weight = group_mouse_df.iloc[:, 1]
#(slope,int, r, p, std_err) from linregress
lin = st.linregress(weight,avgtumor)
regresslinex=np.arange(weight.min(),weight.max()+2,2)
line = regresslinex*lin[0] + lin[1]
#string presenting equation
eq = f"y = {round(lin[0],2)}x+{round(lin[1],2)}"
#scatterplot formatting
plt.figure(figsize=(10,7))
plt.xlabel('Mouse Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.ylim((round(avgtumor.min()-1.5),(round(avgtumor.max())+1.5)))
plt.xlim((round(weight.min()-1.5),(round(weight.max())+1.5)))
plt.title(f'Average Tumor Volume vs Mouse Weight for {drug} Regimen\n')
plt.scatter(y=avgtumor,x=weight,color='slategrey')
#plotting linear regression line with equation annotations
plt.plot(regresslinex,line,"b--")
variables = 'x = Mouse Weight (g) \ny = Average Tumor Volume (mm3)'
plt.annotate(eq,(weight.max()-4,avgtumor.min()+1))
plt.annotate(variables,(weight.max()-4,avgtumor.min()))
plt.show()
#printing out the linear regression results
print(f'Average Tumor Volume vs Mouse Weight for {drug} Regimen\nLinear Regression Model:')
print(f'\n{eq}\nx = Mouse Weight (g)\ny = Average Tumor Volume (mm3)')
print(f'Correlation Coefficient(R) = {round(lin[2],3)}')
#ignore cell, was testing something out
#lab=['a','a','a','a','a','a','a','a','a','a',
# 'b','b','b','b','b','b','b','b','b','b',
# 'c','c','c','c','c','c','c','c','c','c',]
#numbs=[4,4,4,4,4,4,4,4,8,8,
# 4,4,4,4,4,4,4,9,10,9,
# 4,4,4,4,4,6,8,6,6,8]
#uns=['a','b','c']
#random = pd.DataFrame({'label':lab,'numbs':numbs})
#outliercountss=[]
#outlierdfss=[]
#outlieractualvalues=[]
#for i in range(len(uns)):
# smort = random.loc[(random['label'] == uns[i]),:]
# out= smort.loc[(smort['numbs'] > 5),:]
# outliercountss.append(len(out))
# outlierdfss.append(out)
#for i in range(len(outlierdfss)):
# outlieractualvalues.append(outlierdfss[i]['numbs'].values[:])
#outlieractualvalues
#
#summ=pd.DataFrame({'labels':uns,
# 'outliers':outlieractualvalues})
#summ
#whoops, was trying to set up a df to show which drug had the largest tumor change based on the average
#tumor change across each drug regimen, but we were already given the treatments we need to work with
#thought I would do it anyways, created a df with the change in tumor volume, grouped by the drug and then have the average
#tumor volume change, spits out the top four.
#curiously, it doesn't match up with the four treatments we were given
#initial_vol = mouse_data.drop_duplicates(subset = ['Mouse ID'], keep ='first')
#initial_vol = initial_vol[['Mouse ID','Drug Regimen','Tumor Volume (mm3)']]
#initial_vol = initial_vol.rename(columns = {'Tumor Volume (mm3)':'Initial Tumor Volume (mm3)'})
#final_vol = mouse_data.drop_duplicates(subset = ['Mouse ID'], keep ='last')
#final_vol = final_vol[['Mouse ID','Tumor Volume (mm3)']]
#final_vol = final_vol.rename(columns = {'Tumor Volume (mm3)':'Final Tumor Volume (mm3)'})
#change_df = pd.merge(initial_vol,final_vol, on = "Mouse ID")
#change_df['Change in Tumor Volume (mm3)'] = change_df['Initial Tumor Volume (mm3)']-change_df['Final Tumor Volume (mm3)']
#top = change_df.groupby(['Drug Regimen'])
#top_df=pd.DataFrame(top['Change in Tumor Volume (mm3)'].mean())
#top_df=top_df.sort_values("Change in Tumor Volume (mm3)",ascending=False)
#top_df = top_df.reset_index()
#top_df
#top_treatments = top_df[:4]
#top_treatments
#da_best_treatments = top_treatments['Drug Regimen'].values
#da_best_treatments
#print('The top four most promising treatments:')
#for x in da_best_treatments:
# print(x)
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
# Display the data table for preview
combined_df = pd.merge(mouse_metadata, study_results, how="inner", on="Mouse ID")
combined_df
# Checking the number of mice.
mouse_count = combined_df["Mouse ID"].count()
mouse_count
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_rows = combined_df[combined_df.duplicated(['Mouse ID', 'Timepoint'])]
duplicate_rows
# Optional: Get all the data for the duplicate mouse ID.
all_duplicate_rows = combined_df[combined_df.duplicated(['Mouse ID',])]
all_duplicate_rows
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_df = combined_df.drop_duplicates("Mouse ID")
clean_df
# Checking the number of mice in the clean DataFrame.
mousecount=len(clean_df["Mouse ID"].unique())
mousecount
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
mean = combined_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean()
median = combined_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].median()
variance = combined_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].var()
standard_dv = combined_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].std()
sem = combined_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].sem()
summary_df = pd.DataFrame({"Mean": mean, "Median": median, "Variance": variance, "Standard Deviation": standard_dv, "SEM": sem})
summary_df
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Using the aggregation method, produce the same summary statistics in a single line
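# A possible one-line version (added sketch), using groupby + agg on the merged data:
summary_agg = combined_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].agg(
    ["mean", "median", "var", "std", "sem"])
summary_agg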
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
drug_data = pd.DataFrame(combined_df.groupby(["Drug Regimen"]).count()).reset_index()
drugs_df = drug_data[["Drug Regimen", "Mouse ID"]]
drugs_df = drugs_df.set_index("Drug Regimen")
drugs_df.plot(kind="bar", figsize=(10,3))
plt.title("Drug Treatment Count")
plt.show()
plt.tight_layout()
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
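# A possible pyplot version of the same bar chart (added sketch):
counts = combined_df["Drug Regimen"].value_counts()
plt.figure(figsize=(10, 3))
plt.bar(counts.index, counts.values, color="b", align="center")
plt.xticks(rotation=45)
plt.title("Drug Treatment Count")
plt.ylabel("Number of Measurements")
plt.tight_layout()
plt.show()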
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_df = pd.DataFrame(combined_df.groupby(["Sex"]).count()).reset_index()
gender_df = gender_df[["Sex","Mouse ID"]]
plt.figure(figsize=(12,6))
ax1 = plt.subplot(121, aspect="equal")
gender_df.plot(kind="pie", y = "Mouse ID", ax=ax1, autopct='%1.1f%%',
startangle=190, shadow=True, labels=gender_df["Sex"], legend = False, fontsize=14)
plt.title("Male & Female Mice Percentage")
plt.xlabel("")
plt.ylabel("")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
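# A possible pyplot version of the same pie chart (added sketch):
gender_counts = combined_df["Sex"].value_counts()
plt.pie(gender_counts.values, labels=gender_counts.index, autopct="%1.1f%%",
        startangle=190, shadow=True)
plt.title("Male & Female Mice Percentage")
plt.axis("equal")
plt.show()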
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
# Create empty list to fill with tumor vol data (for plotting)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
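# A possible implementation sketch of the steps outlined above (variable names are
# illustrative; it reuses the merged `combined_df` from earlier cells).
treatments = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]

# last (greatest) timepoint per mouse, merged back to recover the final tumor volume
last_timepoint = combined_df.groupby("Mouse ID")["Timepoint"].max().reset_index()
final_tumor = pd.merge(last_timepoint, combined_df, on=["Mouse ID", "Timepoint"], how="left")

tumor_vol_data = []
for drug in treatments:
    volumes = final_tumor.loc[final_tumor["Drug Regimen"] == drug, "Tumor Volume (mm3)"]
    tumor_vol_data.append(volumes)

    # IQR and outlier bounds for this regimen
    lowerq, upperq = volumes.quantile(0.25), volumes.quantile(0.75)
    iqr = upperq - lowerq
    lower_bound, upper_bound = lowerq - 1.5 * iqr, upperq + 1.5 * iqr
    outliers = volumes[(volumes < lower_bound) | (volumes > upper_bound)]
    print(f"{drug}: potential outliers -> {outliers.values}")

fig, ax = plt.subplots()
ax.boxplot(tumor_vol_data, labels=treatments)
ax.set_ylabel("Final Tumor Volume (mm3)")
ax.set_title("Final Tumor Volume by Regimen")
plt.show()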
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
capomulin_df = combined_df.loc[combined_df["Drug Regimen"] == "Capomulin"]
capomulin_df = capomulin_df.reset_index()
capomulin_df.head()
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
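# A possible sketch for the two plots described above (the mouse ID is chosen
# arbitrarily from the Capomulin subset built in the previous cell).
mouse_id = capomulin_df["Mouse ID"].iloc[0]
single_mouse = capomulin_df[capomulin_df["Mouse ID"] == mouse_id]
plt.plot(single_mouse["Timepoint"], single_mouse["Tumor Volume (mm3)"], marker="o")
plt.xlabel("Timepoint (days)")
plt.ylabel("Tumor Volume (mm3)")
plt.title(f"Capomulin: Tumor Volume over Time, Mouse {mouse_id}")
plt.show()

# scatter of average tumor volume vs. mouse weight across all Capomulin mice
avg_by_mouse = capomulin_df.groupby("Mouse ID")[["Weight (g)", "Tumor Volume (mm3)"]].mean()
plt.scatter(avg_by_mouse["Weight (g)"], avg_by_mouse["Tumor Volume (mm3)"])
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.title("Capomulin: Average Tumor Volume vs. Mouse Weight")
plt.show()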
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
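# A possible sketch (added): correlation coefficient and fitted line for mouse
# weight vs. average tumor volume under Capomulin (scipy.stats is imported as st).
avg_df = capomulin_df.groupby("Mouse ID")[["Weight (g)", "Tumor Volume (mm3)"]].mean()
weight = avg_df["Weight (g)"]
avg_tumor = avg_df["Tumor Volume (mm3)"]
corr = st.pearsonr(weight, avg_tumor)[0]
print(f"Correlation between weight and average tumor volume: {corr:.2f}")
slope, intercept, r, p, stderr = st.linregress(weight, avg_tumor)
plt.scatter(weight, avg_tumor)
plt.plot(weight, slope * weight + intercept, "r--")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()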
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
#matplotlib inline
## Dependencies and Setup
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as st
from scipy.stats import linregress
## Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
## Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
## Preview of .csv
print(study_results.head())
print()
print(mouse_metadata.head())
## Combine the data into a single dataset
MergedMiceData_DF = pd.merge(mouse_metadata, study_results, how="outer", on="Mouse ID")
## Display the data table for preview
MergedMiceData_DF.head(10)
## Get info summary on database thus far
print(MergedMiceData_DF.describe())
MergedMiceData_DF.dtypes
## Check info on Mouse ID's (ie counts, values, unique ID's)
#MergedMiceData_DF['Mouse ID'].unique()
UniqMouseIDs = MergedMiceData_DF['Mouse ID'].nunique()
print(f'The number of unique Mouse IDs is: {UniqMouseIDs}.''\n')
## Checking the number of mice.
print(MergedMiceData_DF.count())
MergedMiceData_DF['Mouse ID'].value_counts()
CleanMiceData_DF = MergedMiceData_DF
CleanMiceData_DF.head(5)
CleanMiceData_DF.count()
## Optional: Get all the data for the duplicate mouse ID.
## Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
dupMice = MergedMiceData_DF[MergedMiceData_DF.duplicated(["Mouse ID", "Timepoint"], keep=False)]
dupNumber = dupMice[['Mouse ID', 'Timepoint']].count()
print(f'The number of duplicates for data is: \n{dupNumber} ')
print()
## Showing the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
print("Here are the row numbers, Mouse IDs, and Timpepoints values that are repeated:")
print(dupMice[['Mouse ID', 'Timepoint']])
print()
## Creating cleaned merged dataset by dropping duplicate mice by their IDs from above DF.
## df.drop_duplicates([column(s)], keep, inplace) - Returns DF w/dup rows from columns removed
CleanMiceData_DF = CleanMiceData_DF.drop_duplicates(["Mouse ID", "Timepoint"], keep=False, inplace=False)
## OR could also have used:
#CleanMiceData_DF = MergedMiceData_DF.drop_duplicates(subset=["Mouse ID","Timepoint"])
# Checking the number of mice in the clean DataFrame.
print(CleanMiceData_DF.count())
UniqMouseIDs = CleanMiceData_DF['Mouse ID'].nunique()
print(f'The number of unique Mouse IDs is: {UniqMouseIDs}.''\n')
###Output
Mouse ID 1883
Drug Regimen 1883
Sex 1883
Age_months 1883
Weight (g) 1883
Timepoint 1883
Tumor Volume (mm3) 1883
Metastatic Sites 1883
dtype: int64
The number of unique Mouse IDs is: 249.
###Markdown
Summary Statistics
###Code
# Preview of dataframe for code reference
CleanMiceData_DF.head(5)
## Use groupby and summary statistical methods to calculate the following properties of each drug regimen: # mean, median, variance, standard deviation, and SEM of the tumor volume.
DrugMean = CleanMiceData_DF.groupby("Drug Regimen")['Tumor Volume (mm3)'].mean()
DrugMedian = CleanMiceData_DF.groupby("Drug Regimen")['Tumor Volume (mm3)'].median()
DrugVari = CleanMiceData_DF.groupby("Drug Regimen")['Tumor Volume (mm3)'].var()
DrugStdDev = CleanMiceData_DF.groupby("Drug Regimen")['Tumor Volume (mm3)'].std()
DrugSEM = CleanMiceData_DF.groupby("Drug Regimen")['Tumor Volume (mm3)'].sem()
#print(DrugMean)
#print(DrugMedian)
#print(DrugVari)
#print(DrugStdDev)
#print(DrugSEM)
## Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen. Assemble the resulting series into a single summary dataframe.
SummaryDrugReg_DF = pd.DataFrame({
"Mean": DrugMean, "Median": DrugMedian, "Variance": DrugVari,
"Std Deviation": DrugStdDev, "SEM": DrugSEM})
#SummaryDrugReg_DF
SummaryDrugReg_DF.style.set_caption('TUMOR VOLUME VALUES')
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen. Using the aggregation method, produce the same summary statistics in a single line
# = df.groupby(['Drug GrpBy Col']).agg({('Tumor Vol Col'):["mean", "median", "var", "std", "sem"]})
AggSummary = CleanMiceData_DF.groupby(["Drug Regimen"]).agg({
"Tumor Volume (mm3)":["mean", "median", "var", "std", "sem"]})
AggSummary
## Looking at the summary statistics table of mean, median, variance, standard deviation, and SEM of the Tumor Volumes for each drug regimen, it looks like the most effective drugs are Capomulin & Ramicane. They are nearly identical to each other in every metric, but they beat all other drugs by a clear margin.
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
## Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas.
## Groupby on Drug Regimen
DrugRegimen = CleanMiceData_DF.groupby(["Drug Regimen"])
#DrugRegimen = CleanMiceData_DF.groupby("Drug Regimen")['Mouse ID'].count()
#DrugRegimen.head(10)
## Number of mice used for each drug ( for Y axis)
UniqMouseIDs = DrugRegimen['Mouse ID'].count()
## Ordering greatest to lowest
UniqMouseIDs.sort_values(ascending=False, inplace=True)
## Panda bar plot with index/Regimen as x-axis, and 'Mouse ID' count as y-axis)
UniqMouseIDs.plot(kind='bar', y ='Mouse ID', color='b', align="center")
## Label for y-axis, setting a tight layout, and saving fig to 'data' folder
plt.grid()
plt.ylabel("Number of Mice Tested")
plt.title("Measurement Totals")
plt.tight_layout()
plt.savefig("data/Bar_Panda_MiceTestedRegimen.jpg", dpi=200)
plt.show()
## Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot.
## Using some of the same variables (DrugRegimen & UniqMouseIDs)
## X-axis based on length of Regimens
X_AxisDrugs = np.arange(len(DrugRegimen))
plt.bar(X_AxisDrugs, UniqMouseIDs, color='r', alpha=.5, align="center")
plt.xticks(X_AxisDrugs, labels=UniqMouseIDs.index, rotation="90") #labels=DrugRegimen
plt.grid()
plt.title("Measurement Totals")
plt.ylabel("Number of Mice Tested")
plt.tight_layout()
plt.savefig("data/Bar_Plyplot_MiceTestedRegimen.jpg", dpi=200)
plt.show()
## Generate a pie plot showing the distribution of female versus male mice using pandas
## Generate data on mice results based on sex of mice
MiceGender = CleanMiceData_DF.groupby(["Sex"])
GenderResults = MiceGender['Drug Regimen'].count()
GenderResults
GenderResults.plot(kind='pie', labels=GenderResults.index, colors=("pink", "cyan"),
autopct="%1.1f%%", shadow=True, startangle=-180)
## Label for y-axis, setting a tight layout, and saving fig to 'data' folder
plt.ylabel("% of Mice Genders Tested")
plt.title("Distribution of Male/Female Mice in Study")
plt.tight_layout()
plt.savefig("data/Pie_Panda_MiceGenders.jpg", dpi=200)
plt.show()
## Generate a pie plot showing the distribution of female versus male mice using pyplot
MiceGender = CleanMiceData_DF.groupby(["Sex"])
GenderResults = MiceGender['Drug Regimen'].count()
plt.pie(GenderResults, explode = (0.1,0.0), labels=GenderResults.index, colors=("magenta", "aqua"),
autopct="%1.1f%%", shadow=True, startangle=-180)
plt.ylabel("% of Mice Genders Tested")
plt.title("Distribution of Male/Female Mice in Study")
plt.tight_layout()
plt.savefig("data/Pie_Plyplot_MiceGenders.jpg", dpi=200)
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
## Calculate the final tumor volume of each mouse across four of the treatment regimens:
## Capomulin, Ramicane, Infubinol, and Ceftamin
## Start by getting the last (greatest) timepoint for each mouse
MiceTumorStudy_DF = CleanMiceData_DF
MiceTumorStudy_DF
MiceTumorStudy_DF = MiceTumorStudy_DF.groupby('Mouse ID').max().reset_index()
MiceTumorStudy_DF
## Merge this group df with the original dataframe to get the tumor volume at the last timepoint
MiceTumorStudy_DF = MiceTumorStudy_DF[['Mouse ID','Timepoint']].merge(CleanMiceData_DF, on=['Mouse ID','Timepoint'], how="left")
MiceTumorStudy_DF.head(5)
## Create dataframe for all data of just 4 drug regimens to review:
## "Capomulin", "Ramicane", "Infubinol", "Ceftamin"
All4MiceData = MiceTumorStudy_DF.loc[(MiceTumorStudy_DF['Drug Regimen'] == 'Capomulin') |
(MiceTumorStudy_DF['Drug Regimen'] == 'Ramicane') |
(MiceTumorStudy_DF['Drug Regimen'] == 'Infubinol') |
(MiceTumorStudy_DF['Drug Regimen'] == 'Ceftamin'),:]
All4MiceData
## Create a df of just the 4 Drug Regimen and their Tumor Vol column data
DrugTumorData = All4MiceData[['Drug Regimen','Tumor Volume (mm3)']]
All4MiceData[['Mouse ID', 'Drug Regimen','Tumor Volume (mm3)']].head(10)
#All4MiceData
## Put treatments into a list for a 'for loop' (and later for plot labels)
Treatments = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
## Create empty list to fill with tumor vol data (for plotting)
TumorVolumes = []
## Created an extra list to hold outlier boolean values/check
OutlierCheck = []
## Counter for loop to iterate through Treatment list
CurrentDrug = 0
X = ''
## Calculate the IQR and quantitatively determine if there are any potential outliers.
## Locate the rows which contain mice on each drug and get the tumor volumes, add subset
## Determine outliers using upper and lower bounds
## If the data is in a dataframe, we use pandas to give quartile calculations
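## (Hedged toy illustration of the IQR rule used in the loop below, with made-up numbers rather than study data:
##  for [30, 32, 33, 35, 36, 38, 70], Q1 = 32.5, Q3 = 37.0, IQR = 4.5, so the bounds are 25.75 / 43.75 and 70 is flagged.)
ToyVolumes = pd.Series([30, 32, 33, 35, 36, 38, 70])
ToyQ1, ToyQ3 = ToyVolumes.quantile(0.25), ToyVolumes.quantile(0.75)
ToyIQR = ToyQ3 - ToyQ1
ToyOutliers = ToyVolumes[(ToyVolumes < ToyQ1 - 1.5 * ToyIQR) | (ToyVolumes > ToyQ3 + 1.5 * ToyIQR)]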
for treatment in Treatments:
## Passing through list, filter data by the individual drugs and their Tumor Volumes column data
TreatmentData = DrugTumorData[DrugTumorData['Drug Regimen'] == treatment]['Tumor Volume (mm3)']
## Perform calculations for quartiles, IQR, and the outlier bounds
quartiles = TreatmentData.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
## Append the Tumor Volume columns' data (for each drug) to list for future plotting
TumorVolumes.append(TreatmentData)
## Checking for outliers above/below bound thresholds
outliers = (TreatmentData < lower_bound) | (TreatmentData > upper_bound)
## Rename/name new DF to give it a column title
outliers = outliers.to_frame(f"{Treatments[CurrentDrug]} outliers check:")
## Any outlier that returns a True boolean is tagged and added to Outlier check
## This will be used to locate the outlier's index row and its values to print from the main DF
FoundOutlier = outliers.index[outliers[f"{Treatments[CurrentDrug]} outliers check:"] == True].tolist()
OutlierCheck.append(outliers)
## If no outliers are found, return just the lower/upper bound data,
# CurrentDrug counter will increase by 1 to iterate through treatment list to next drug
if not FoundOutlier:
print(f"Drug Regimen: '{treatment}' has a Lower Bound of: {lower_bound} and Upper Bound of: {upper_bound}. \n There are no potential outliers within the '{treatment}' dataset. \n")
CurrentDrug += 1
## If outlier is found, we will get the index number to cross reference in another dataframe to return values.
## Will then print lower/upper bound data as well as an ID and outlier value found. Iterate up through Treatment list.
## There is an outlier with the drug "Infubinol" of the 4 drug regimens.
else:
## Converting the returned list (a single index number) into an integer that can be used as an index to find the 'Mouse ID' (assumes at most one outlier per regimen)
X = int("".join(str(i) for i in FoundOutlier))
MouseId = All4MiceData['Mouse ID'][X]
print(f"Drug Regimen: '{treatment}' has a Lower Bound of: {lower_bound} and Upper Bound of: {upper_bound}. \n Mouse ID: '{MouseId}' is a potential outlier for '{treatment}' with a tumor volume of: {TreatmentData[X]}. \n")
CurrentDrug += 1
## Print commands resource to review findings and test data
#print(TreatmentData[FoundOutlier])
#print(FoundOutlier)
#print(lower_bound)
#print(upper_bound)
#print()
#HighOutlier = (TreatmentData > upper_bound)
#LowOutlier = (TreatmentData < lower_bound)
#print(HighOutlier)
#print(LowOutlier)
#print(outliers)
#print()
#print(X)
#MouseId
#print(TreatmentData)
#print(OutlierCheck)
#print(TumorVolumes)
#All4MiceData[['Mouse ID', 'Drug Regimen','Tumor Volume (mm3)']].head(35)
## Generate a box plot of the final tumor volume of each mouse across four regimens of interest
Treatments = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]
fig1, ax1 = plt.subplots(figsize = (6, 6))
#fig1, ax1 = plt.subplots()
ax1.set_title("Final Tumor Volumes")
ax1.set_xlabel("Drug Regimens")
ax1.set_ylabel("Tumor Volume (mm3)")
ax1.set_xticklabels(Treatments)
ax1.boxplot(TumorVolumes)
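## (Side note: passing the labels directly to the boxplot call, e.g. ax1.boxplot(TumorVolumes, labels=Treatments),
##  would avoid the FixedFormatter warning shown in the output below.)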
plt.grid()
plt.tight_layout()
plt.savefig("data/BoxPlot_FinalTumorVolumes.jpg", dpi=200)
plt.show()
###Output
<ipython-input-34-f9d83e3540f0>:9: UserWarning: FixedFormatter should only be used together with FixedLocator
ax1.set_xticklabels(Treatments)
###Markdown
Line and Scatter Plots
###Code
#CapomulinData = DrugTumorData[DrugTumorData['Drug Regimen'] == treatment]['Tumor Volume (mm3)']
#All4MiceData[['Mouse ID', 'Drug Regimen','Tumor Volume (mm3)']]
#CapomulinData = All4MiceData.loc[All4MiceData['Drug Regimen'] == 'Capomulin'),:]
#CapomulinData = CleanMiceData_DF.loc[CleanMiceData_DF['Drug Regimen'] == 'Capomulin',:]
## Using original cleaned/unedited dataframe, filter by 'Capomulin'
CapomulinData = CleanMiceData_DF.loc[CleanMiceData_DF['Drug Regimen'] == 'Capomulin',:]
CapomulinData
## Choose a mouse to eventually use as an example of Tumor Volumes over Timepoints
MouseX401 = CleanMiceData_DF.loc[CleanMiceData_DF['Mouse ID'] == 'x401',:]
## Create X/Y-axis for charts to represent Timepoint column data and Tumor Vol data for Mouse X401
Timepoint_xaxis = MouseX401["Timepoint"]
TumorVol_yaxis = MouseX401["Tumor Volume (mm3)"]
## Plot Line Chart
fig1, ax1 = plt.subplots(figsize = (8, 8))
plt.xlabel("Timepoint")
plt.ylabel("Tumor Volume (mm3)")
plt.xlim([-1, 50])
plt.ylim([25, 50])
plt.plot(Timepoint_xaxis, TumorVol_yaxis, color = "g", label=f"Mouse x401")
plt.scatter(Timepoint_xaxis, TumorVol_yaxis, color = "y", alpha=1, edgecolor="orange")
## Added scatter for fun :) and practice for next chart
plt.title("Capomulin Mouse X401 Tumor Volume")
plt.grid()
plt.savefig("data/Line_Plyplot_MouseX401Tumor.jpg", dpi=200)
plt.show()
## Checking overall data on subject mouse
MouseX401
## Looking at the line plot of Timepoint vs Tumor Volume for female mouse subject 'x401', as well as the subject's overall data snapshot while on this drug regimen, the drug 'Capomulin' appears very effective at reducing her tumor volume over time. The decrease in tumor volume is steady even as the subject's weight stays the same. These are promising results for this drug with respect to this subject.
# Generate a scatter plot of mouse weight vs average/mean tumor volume for the Capomulin regimen
CapolmulinMean = CapomulinData.groupby(["Mouse ID"]).mean()
fig1, ax1 = plt.subplots(figsize = (8, 8))
plt.scatter(CapolmulinMean["Weight (g)"], CapolmulinMean["Tumor Volume (mm3)"])
plt.title("Mouse Weight vs Average Tumor Volume for Capomulin")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.xlim([14, 27])
plt.ylim([34, 47])
plt.grid()
plt.savefig("data/Scatter_Plyplot_WeightVsTumor.jpg", dpi=200)
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
## Calculate the correlation coefficient for mouse weight(x-axis) vs tumor volume(y-axis) from scatter above
## CapolmulinMean["Weight (g)"] , CapolmulinMean["Tumor Volume (mm3)"]
## correlation = round(st.pearsonr(x-ax,y-ax)[0],2)
correlation = round(st.pearsonr(CapolmulinMean["Weight (g)"], CapolmulinMean["Tumor Volume (mm3)"])[0],2)
print(f"The correlation coefficient between mouse weight and tumor volume is {correlation}.")
# Calculate the linear regression model for mouse weight and average tumor volume for the Capomulin regimen from scatter plot
CapolmulinMean = CapomulinData.groupby(["Mouse ID"]).mean()
fig1, ax1 = plt.subplots(figsize = (6, 6))
## Linear Regression formulation
x_values = CapolmulinMean["Weight (g)"]
y_values = CapolmulinMean["Tumor Volume (mm3)"]
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values, y_values)
plt.plot(x_values, regress_values, "r-")
plt.annotate(line_eq, (20,1), fontsize=15, color="red")
plt.title("Mouse Weight vs Average Tumor Volume for Capomulin")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.xlim([14, 27])
plt.ylim([34, 47])
plt.grid()
print(f"The r-squared is: {rvalue**2}")
plt.savefig("data/LineRegress_Plyplot_WeightVsTumor.jpg", dpi=200)
plt.show()
## There is a strong positive linear relationship between the two variables: as the weight of a mouse increases, the average tumor volume also increases for mice on the 'Capomulin' regimen. Note: the correlation looks strong, but correlation is not the same as causation.
### _______________________ Ithamar Francois _______________________ ###
###Output
_____no_output_____
###Markdown
Observations and Insights Analysis:

1) From the boxplot it can be seen that the Capomulin and Ramicane regimens had the lowest final tumor volume medians, so they appear the most promising at reducing tumor size, though confidence intervals for all the drugs would need to be calculated to see if further investigation into which drug is most effective is necessary.

2) Infubinol has a lower-bound outlier. It is the only one of the top four drugs plotted that contains an outlier. Further testing could be done to discover exactly which mouse represents this particular outlier and what is different about this mouse that makes it an outlier.

3) The r-squared value of the linear regression model is 0.708 (70.8%) and the r-value is 0.84 (84%), so the regression of average tumor volume on mouse weight for the Capomulin regimen appears to capture the majority of the variance in average tumor volume.
###Code
%matplotlib inline
# Dependencies and Setup
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy.stats as st
from scipy.stats import linregress
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset and display the data table for preview
combined_mousestudy_df = pd.merge(mouse_metadata, study_results,
how='left', on='Mouse ID')
# sorting by mouse id and remove values that are duplicated
combined_mousestudy_df.sort_values(["Mouse ID", "Timepoint"], inplace = True)
combined_mousestudy_df.loc[combined_mousestudy_df.duplicated(subset = ["Mouse ID", "Timepoint"]), "Mouse ID"]
combined_mousestudy_df = combined_mousestudy_df.loc[combined_mousestudy_df["Mouse ID"] != "g989", :]
combined_mousestudy_df.head()
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Summary statistics table
# mean, median, variance, standard deviation, and SEM (standard error of the mean) of the tumor volume for each regimen
tumor_volume_avg = combined_mousestudy_df.groupby(["Drug Regimen"]).mean()["Tumor Volume (mm3)"].rename("Mean")
tumor_volume_median = combined_mousestudy_df.groupby(["Drug Regimen"]).median()["Tumor Volume (mm3)"].rename("Median")
tumor_volume_variance = combined_mousestudy_df.groupby(["Drug Regimen"]).var()["Tumor Volume (mm3)"].rename("Variance")
tumor_volume_standev = combined_mousestudy_df.groupby(["Drug Regimen"]).std()["Tumor Volume (mm3)"].rename("Standard Deviation")
tumor_volume_sem = combined_mousestudy_df.groupby(["Drug Regimen"]).sem()["Tumor Volume (mm3)"].rename("Standard Error of Mean")
Summary_tumor_volume = pd.DataFrame({'Avg Tumor Vol': tumor_volume_avg, 'Median Tumor Vol': tumor_volume_median,
'Tumor Vol Variance': tumor_volume_variance, 'Tumor Vol Standard Deviation': tumor_volume_standev,
'Tumor Vol Standard Error of Mean': tumor_volume_sem})
Summary_tumor_volume.head()
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Grouping mice by drug regimen
mice_by_treatment = combined_mousestudy_df.groupby("Drug Regimen")
# Count how many number of mice in each regimen
count_mice_by_treatment = mice_by_treatment["Drug Regimen"].count()
# Bar plot displaying number of mice per drug regimen by pandas method
count_chart = count_mice_by_treatment.plot(kind='bar', color="b")
# Set the xlabel and ylabel
plt.title("Mice Per Drug Regimen")
count_chart.set_xlabel("Drug Regimen")
count_chart.set_ylabel("Number of Mice")
# Get the number of mice per treatment to make an array for pyplot method
count_mice_by_treatment
# Bar plot displaying number of mice per drug regimen by pyplot method
#Create the array with the numbers of mice per treatment
treatment_numbers = [230, 178, 178, 188, 186, 181, 148, 228, 181, 182]
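# (Hedged alternative: hard-coding these counts is fragile; the same values could be pulled from the data with, e.g.,
#  treatment_numbers = count_mice_by_treatment.sort_index().tolist(), which lines up with the alphabetical tick labels below.)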
#Set the x_axis to be the amount of mice per treatment
x_axis = np.arange(len(count_mice_by_treatment))
plt.bar(x_axis, treatment_numbers, color='b', alpha=0.75, align='center')
# create the tick locations
tick_locations = [i for i in x_axis]
plt.xticks(tick_locations, ['Capomulin', 'Ceftamin', 'Infubinol',
'Ketapril', 'Naftisol', 'Placebo', 'Propriva',
'Ramicane', 'Stelasyn', 'Zoniferol'], rotation='vertical')
#set the limits
plt.xlim(-0.75, len(x_axis)-0.25)
plt.ylim(0, max(treatment_numbers)+10)
#set the labels
plt.title("Mice Per Drug Regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Mice")
# Grouping mice by drug regimen
mice_by_sex = combined_mousestudy_df.groupby("Sex")
# Count how many number of mice in each regimen
count_mice_by_sex = mice_by_sex["Sex"].count()
# pie chart displaying the distribution of female versus male mice using pandas
colors = ["lightpink", "lightskyblue"]
explode = (0, 0.05)
mice_by_sex_chart = count_mice_by_sex.plot(kind='pie', autopct="%1.1f%%", shadow=True, startangle=0, colors=colors , explode=explode)
plt.title("Mice by Sex")
plt.axis("equal")
plt.ylabel(None)
# pie chart displaying the distribution of female versus male mice using pyplot
sex = combined_mousestudy_df["Sex"].unique()
percent_sex = count_mice_by_sex
colors = ["lightpink", "lightskyblue"]
explode = (0, 0.05)
plt.pie(percent_sex, explode=explode, labels=sex, colors=colors,
autopct="%1.1f%%", shadow=True, startangle=0)
plt.title("Mice by Sex")
plt.axis("equal")
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Grab the top 4 regimens (Capomulin, Ramicane, Infubinol, and Ceftamin)
combined_df = combined_mousestudy_df
drug_list =['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin']
top_reg = combined_df[combined_df["Drug Regimen"].isin(drug_list)]
top_reg = top_reg.sort_values(["Timepoint"])
top_reg
top_reg_dat = top_reg[["Drug Regimen", "Mouse ID", "Timepoint", "Tumor Volume (mm3)"]]
top_reg_dat
#Group the data by both Drug Regimen and Mouse ID and get the final tumor measurement
top_reg_sort = top_reg_dat.groupby(['Drug Regimen', 'Mouse ID']).last()['Tumor Volume (mm3)']
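# (.last() keeps each group's final row in the frame's current order, which is why top_reg was sorted by Timepoint above.)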
top_reg_sort.head()
# put above data into dataframe for future manipulation
top_reg_df = top_reg_sort.to_frame()
top_reg_df
#reset the index for final 4 drugs of interest
tumor_drugs_df = top_reg_df.reset_index()
# Create a tumor drug regimen list with tumor volume for plotting
final_drug_list = tumor_drugs_df.groupby('Drug Regimen')['Tumor Volume (mm3)'].apply(list)
final_drug_list
#Turn the list into a dataframe for manipulation
final_drugs_df = pd.DataFrame(final_drug_list)
# Generate a box plot of the final tumor volume of each mouse across the top four drug regimens
tumor_volumes = [volume for volume in final_drugs_df['Tumor Volume (mm3)']]
plt.boxplot(tumor_volumes, labels=final_drug_list.index)  # use the groupby (alphabetical) order so each label matches its box
plt.ylim(20, 80)
plt.title("Final Tumor Volume per Mouse Across Top 4 Drug Regimens")
plt.ylabel('Tumor Volume (mm3)')
plt.xlabel('Drug Regimen')
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Randomly selected a mouse and grab data for that particular mouse, the mouse chosen: m601
time_point_vs_tumor_vol = combined_df[combined_df["Mouse ID"].isin(["m601"])]
time_point_vs_tumor_vol
#Create new data frame for mouse ID m601 with ID, timepoint and tumor volume
time_point_vs_tumor_vol_df = time_point_vs_tumor_vol[["Mouse ID", "Timepoint", "Tumor Volume (mm3)"]]
time_point_vs_tumor_vol_df
# Establish line plot data frame, resetting the index
line_plot = time_point_vs_tumor_vol_df.reset_index()
line_plot
# Remove index for graphing of the line plot with final dataframe
final_line_plot = line_plot[["Mouse ID", "Timepoint", "Tumor Volume (mm3)"]]
final_line_plot
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
color = ["green", "darkorange"]
lines_df = final_line_plot.plot.line(color = color)
plt.title("Timepoint vs Tumor Volume")
plt.xlabel('Timepoint')
plt.ylabel('Tumor Volume (mm3)')
# Create a dataframe to retrieve data for the specific drug regimen 'Capomulin' using pandas method
Capomulin_df = combined_df[combined_df["Drug Regimen"].isin(["Capomulin"])]
Capomulin_df
Capomulin_Scatter_plot = Capomulin_df.reset_index()
Capomulin_Scatter_plot
Capomulin_weight = Capomulin_Scatter_plot.groupby(['Mouse ID', 'Weight (g)', 'Tumor Volume (mm3)']).mean()
Capomulin_weight
Capomulin_weight_volume_plot = pd.DataFrame(Capomulin_weight).reset_index()
Capomulin_weight_volume_plot
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
Capomulin_scatter_final = Capomulin_weight_volume_plot.groupby('Mouse ID').mean()
Capomulin_scatter_final
Capomulin_scatter_final.plot(kind='scatter', x='Weight (g)',
y='Tumor Volume (mm3)', grid = True, figsize= (8,8), c='red', s=100)
plt.title("Mouse Weight vs Average Tumor Volume for Capomulin Drug Regimen")
plt.xlabel('Mouse Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
# Create a dataframe to retrieve data for the specific drug regimen 'Capomulin' using pyplot method
# Get the data for the specific drug and calculate the average
Capomulin_data = combined_mousestudy_df.loc[combined_mousestudy_df["Drug Regimen"]=="Capomulin"]
average = Capomulin_data.groupby(['Mouse ID']).mean()
# plot the data
plt.scatter(average["Weight (g)"], average["Tumor Volume (mm3)"], c='red', s=80)
plt.title("Mouse Weight vs Average Tumor Volume for Capomulin Drug Regimen")
plt.xlabel('Mouse Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Add the linear regression equation and line to plot
x_values = Capomulin_scatter_final["Weight (g)"]
y_values = Capomulin_scatter_final["Tumor Volume (mm3)"]
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
plt.scatter(x_values,y_values, color = 'red')
plt.plot(x_values,regress_values,color = "blue", label = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2)))
plt.legend(loc="upper left")
plt.xlabel('Mouse Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.title("Mouse Weight Vs. Average Tumor Volume")
print(f"The r-value is: {rvalue}")
print(f"The r-squared is: {rvalue**2}")
plt.show()
###Output
The r-value is: 0.8419363424694726
The r-squared is: 0.708856804770873
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
regimen_mean = clean_merged_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].mean()
regimen_median = clean_merged_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].median()
regimen_var = clean_merged_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].var()
regimen_std = clean_merged_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].std()
regimen_sem = clean_merged_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].sem()
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
regimen_summary = pd.DataFrame({"Mean (Tumor Volume (mm3))": regimen_mean,
"Median (Tumor Volume (mm3))": regimen_median,
"Variance (Tumor Volume (mm3))": regimen_var,
"Standard Deviation (Tumor Volume (mm3))": regimen_std,
"SEM (Tumor Volume (mm3))": regimen_sem})
regimen_summary
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of measurements taken on each drug
# regimen using pandas.
plot_data_1 = clean_merged_df["Drug Regimen"].unique()
plot_data_2 = clean_merged_df["Drug Regimen"].value_counts()
plt.bar(plot_data_1, plot_data_2)
plt.xticks(rotation=45)
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Measurements Taken")
plt.title("Total Number of Measurements Taken on Each Drug Regimen")
plt.show()
%matplotlib notebook
# Generate a bar plot showing the total number of measurements
#taken on each drug regimen using pyplot.
py_measurement = clean_merged_df["Drug Regimen"].value_counts().tolist()
each_regimen = clean_merged_df["Drug Regimen"].unique().tolist()
py_measurement
x_axis = np.arange(len(py_measurement))
plt.bar(x_axis, py_measurement, alpha=0.5, color='b')
tick_locations = [value for value in x_axis]
plt.xticks(tick_locations, each_regimen, rotation=45)
# Set the limits of the x axis
plt.xlim(-0.75, len(x_axis)-0.25)
# Set the limits of the y axis
plt.ylim(0, max(py_measurement)+10)
# Give the chart a title, x label, and y label
plt.title("Total Number of Measurements Take On Each Drug Regimen")
plt.xlabel("Regimen")
plt.ylabel("Total Measurements")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_distribution = clean_merged_df.groupby("Sex")["Mouse ID"].nunique()
gender_distribution
female_percent = (gender_distribution[0] / number_of_mice) * 100
male_percent = (gender_distribution[1] / number_of_mice) * 100
# Labels for the sections of our pie chart
labels = ["Female", "Male"]
# The values of each section of the pie chart
sizes = [female_percent, male_percent]
# The colors of each section of the pie chart
colors = ["lightcoral", "lightskyblue"]
# Tells matplotlib to separate the "Male" slice from the others
explode = (0.0, 0.1)
plt.pie(sizes, explode=explode, labels=labels, colors=colors,
autopct="%1.1f%%", shadow=True, startangle=140)
plt.show()
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
mean_tumor_volume_data_grouped = mouse_clinical_data.groupby(['Drug','Timepoint']).mean()
mean_tumor_volume_data_grouped
# Convert to DataFrame
mean_tumor_volume_data_grouped_df = pd.DataFrame(mean_tumor_volume_data_grouped)
del mean_tumor_volume_data_grouped_df['Metastatic Sites']
mean_tumor_volume_data_grouped_df = mean_tumor_volume_data_grouped_df.reset_index()
# Preview DataFrame
mean_tumor_volume_data_grouped_df.head()
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
# Calculate standard error on means
sem_tumor_volume_data_grouped = mouse_clinical_data.groupby(['Drug','Timepoint']).sem()
sem_tumor_volume_data_grouped
# Convert to DataFrame
sem_tumor_volume_data_grouped_df = pd.DataFrame(sem_tumor_volume_data_grouped)
del sem_tumor_volume_data_grouped_df['Metastatic Sites']
del sem_tumor_volume_data_grouped_df['Mouse ID']
sem_tumor_volume_data_grouped_df = sem_tumor_volume_data_grouped_df.reset_index()
# Preview DataFrame
sem_tumor_volume_data_grouped_df.head()
# Minor Data Munging to Re-Format the Data Frames
mean_tumor_volume_data_grouped_pivot = mean_tumor_volume_data_grouped_df.pivot_table(index='Timepoint',columns='Drug',values='Tumor Volume (mm3)')
sem_tumor_volume_data_grouped_pivot = sem_tumor_volume_data_grouped_df.pivot_table(index='Timepoint',columns='Drug',values='Tumor Volume (mm3)')
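# (After pivoting, Timepoint is the index and each Drug is a column, so e.g.
#  mean_tumor_volume_data_grouped_pivot['Capomulin'] is the mean-volume time series plotted below.)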
# Preview that Reformatting worked
mean_tumor_volume_data_grouped_pivot.head()
# Generate the Plot (with Error Bars)
plt.errorbar(mean_tumor_volume_data_grouped_pivot.index, mean_tumor_volume_data_grouped_pivot['Capomulin'],
yerr=sem_tumor_volume_data_grouped_pivot['Capomulin'],
color='red', marker='o', markersize=5, linestyle='--', linewidth=0.5)
plt.errorbar(mean_tumor_volume_data_grouped_pivot.index, mean_tumor_volume_data_grouped_pivot['Infubinol'],
yerr=sem_tumor_volume_data_grouped_pivot['Infubinol'],
color='blue', marker='s', markersize=5, linestyle='--', linewidth=0.5)
plt.errorbar(mean_tumor_volume_data_grouped_pivot.index, mean_tumor_volume_data_grouped_pivot['Ketapril'],
yerr=sem_tumor_volume_data_grouped_pivot['Ketapril'],
color='green', marker='D', markersize=5, linestyle='--', linewidth=0.5)
plt.errorbar(mean_tumor_volume_data_grouped_pivot.index, mean_tumor_volume_data_grouped_pivot['Placebo'],
yerr=sem_tumor_volume_data_grouped_pivot['Placebo'],
color='black', marker='d', markersize=5, linestyle='--', linewidth=0.5)
x_lim = len(mean_tumor_volume_data_grouped_pivot.index)
plt.title("Tumor Response to Treatment")
plt.xlabel("Time (Days)")
plt.ylabel("Tumor Volume (mm3)")
plt.legend(['Capomulin', 'Infubinol', 'Ketapril', 'Placebo'], loc='best')
plt.grid()
# Save the Figure
plt.savefig(os.path.join('figures','treatment.png'))
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
 Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
mean_tumor_mresponse_data_grouped = mouse_clinical_data.groupby(['Drug','Timepoint']).mean()
del mean_tumor_mresponse_data_grouped['Tumor Volume (mm3)']
# Convert to DataFrame
mean_tumor_mresponse_data_grouped_df = pd.DataFrame(mean_tumor_mresponse_data_grouped)
#mean_tumor_mresponse_data_grouped_df = mean_tumor_mresponse_data_grouped_df.reset_index()
# Preview DataFrame
mean_tumor_mresponse_data_grouped_df.head()
# Store the Standard Error associated with Met. Sites Grouped by Drug and Timepoint
sem_tumor_mresponse_data_grouped = mouse_clinical_data.groupby(['Drug','Timepoint']).sem()
del sem_tumor_mresponse_data_grouped['Tumor Volume (mm3)']
del sem_tumor_mresponse_data_grouped['Mouse ID']
# Convert to DataFrame
sem_tumor_mresponse_data_grouped_df = pd.DataFrame(sem_tumor_mresponse_data_grouped)
#mean_tumor_mresponse_data_grouped_df = mean_tumor_mresponse_data_grouped_df.reset_index()
# Preview DataFrame
sem_tumor_mresponse_data_grouped_df.head()
# Minor Data Munging to Re-Format the Data Frames_MEAN
mean_tumor_mresponse_data_grouped_pivot = mean_tumor_mresponse_data_grouped_df.pivot_table(index='Timepoint',columns='Drug',values='Metastatic Sites')
sem_tumor_mresponse_data_grouped_pivot = sem_tumor_mresponse_data_grouped_df.pivot_table(index='Timepoint',columns='Drug',values='Metastatic Sites')
# Preview that Reformatting worked
mean_tumor_mresponse_data_grouped_pivot.head()
# Generate the Plot (with Error Bars)
plt.errorbar(mean_tumor_mresponse_data_grouped_pivot.index, mean_tumor_mresponse_data_grouped_pivot['Capomulin'],
yerr=sem_tumor_mresponse_data_grouped_pivot['Capomulin'],
color='red', marker='o', markersize=5, linestyle='--', linewidth=0.5)
plt.errorbar(mean_tumor_mresponse_data_grouped_pivot.index, mean_tumor_mresponse_data_grouped_pivot['Infubinol'],
yerr=sem_tumor_mresponse_data_grouped_pivot['Infubinol'],
color='blue', marker='s', markersize=5, linestyle='--', linewidth=0.5)
plt.errorbar(mean_tumor_mresponse_data_grouped_pivot.index, mean_tumor_mresponse_data_grouped_pivot['Ketapril'],
yerr=sem_tumor_mresponse_data_grouped_pivot['Ketapril'],
color='green', marker='D', markersize=5, linestyle='--', linewidth=0.5)
plt.errorbar(mean_tumor_mresponse_data_grouped_pivot.index, mean_tumor_mresponse_data_grouped_pivot['Placebo'],
yerr=sem_tumor_mresponse_data_grouped_pivot['Placebo'],
color='black', marker='d', markersize=5, linestyle='--', linewidth=0.5)
x_lim = len(mean_tumor_mresponse_data_grouped_pivot.index)
plt.title("Metastatic Spread During Treatment")
plt.xlabel("Treatment Duration (Days)")
plt.ylabel("Met. Sites")
plt.legend(['Capomulin', 'Infubinol', 'Ketapril', 'Placebo'], loc='best')
plt.grid()
# Save the Figure
plt.savefig(os.path.join('figures','spread.png'))
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
 Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (we can pass any metric to count)
count_mouse_data_grouped = mouse_clinical_data.groupby(['Drug','Timepoint']).count()['Mouse ID']
#count_mouse_data_grouped = count_mouse_data_grouped.reset_index()
# Convert to DataFrame
count_mouse_data_grouped_df = pd.DataFrame({"Mouse Count": count_mouse_data_grouped})
# Preview DataFrame
count_mouse_data_grouped_df.head().reset_index()
# Minor Data Munging to Re-Format the Data Frames_MEAN
count_mouse_data_grouped_pivot = count_mouse_data_grouped_df.pivot_table(index='Timepoint',columns='Drug',values='Mouse Count')
# Preview that Reformatting worked
count_mouse_data_grouped_pivot.head()
# Generate the Plot (Accounting for percentages)
plt.errorbar(count_mouse_data_grouped_pivot.index, count_mouse_data_grouped_pivot['Capomulin'],
color='red', marker='o', markersize=5, linestyle='--', linewidth=0.5)
plt.errorbar(count_mouse_data_grouped_pivot.index, count_mouse_data_grouped_pivot['Infubinol'],
color='blue', marker='s', markersize=5, linestyle='--', linewidth=0.5)
plt.errorbar(count_mouse_data_grouped_pivot.index, count_mouse_data_grouped_pivot['Ketapril'],
color='green', marker='D', markersize=5, linestyle='--', linewidth=0.5)
plt.errorbar(count_mouse_data_grouped_pivot.index, count_mouse_data_grouped_pivot['Placebo'],
color='black', marker='d', markersize=5, linestyle='--', linewidth=0.5)
x_lim = len(count_mouse_data_grouped_pivot.index)
plt.title("Survival During Treatment")
plt.xlabel("Time (Days)")
plt.ylabel("Survival Rate (%)")
plt.legend(['Capomulin', 'Infubinol', 'Ketapril', 'Placebo'], loc='best')
plt.grid()
# Save the Figure
plt.savefig(os.path.join('figures','survival.png'))
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
 Summary Bar Graph
###Code
# Calculate the percent changes for each drug
percent_change_by_drug = ((mean_tumor_volume_data_grouped_pivot.iloc[-1]-mean_tumor_volume_data_grouped_pivot.iloc[0])/mean_tumor_volume_data_grouped_pivot.iloc[0]) * 100
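# (Sanity check on the formula: every regimen starts at a mean tumor volume of 45 mm3 at Timepoint 0 in this dataset,
#  so a final mean of roughly 36 mm3 works out to about (36 - 45) / 45 * 100 = -20%.)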
# Display the data to confirm
percent_change_by_drug
# Store all Relevant Percent Changes into a Tuple
drug_tuple = ('Capomulin', 'Infubinol','Ketapril','Placebo')
# Splice the data between passing and failing drugs
passing = percent_change_by_drug < 0
# Orient widths. Add labels, tick marks, etc.
change_list = [(percent_change_by_drug[drug])for drug in drug_tuple]
change_plt = plt.bar(drug_tuple,change_list,width=-1,align='edge',color=passing.map({True:'g',False:'r'}))
plt.xticks()
plt.ylim(-30,70)
plt.ylabel('% Tumor Volume Change')
plt.title('Tumor Change Over 45 Day Treatment')
# Use functions to label the percentages of changes
def autolabel(rects):
for rect in rects:
height = rect.get_height()
if height > 0:
label_position = 3
else:
label_position = -7
plt.text(rect.get_x() + rect.get_width()/2., label_position,
'%d' % int(height)+'%',color='white',
ha='center', va='bottom')
# Call functions to implement the function calls
autolabel(change_plt)
plt.grid()
# Save the Figure
plt.savefig(os.path.join('figures','change.png'))
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown

###Code
# Three observations about the results of the study:
# 1. Capomulin was the only drug that showed a positive result, with an overall reduction in tumor volume.
# 2. Metastatic spread was relatively better for Infubinol than for Ketapril and the Placebo; however, after around the 32nd day of treatment, Infubinol showed the lowest survival rate of the cohort.
# 3. At the end of the treatment, Ketapril showed the same survival rate and metastatic spread as the Placebo.
###Output
_____no_output_____
###Markdown
Tumor Response to Treatment
###Code
# Store the Mean Tumor Volume Data Grouped by Drug and Timepoint
drugdata = all_data.groupby(['Drug','Timepoint'])
# Convert to DataFrame
Tumorvolume = pd.DataFrame(drugdata['Tumor Volume (mm3)'].mean())
Tumorvolume.reset_index(inplace=True)
# Preview DataFrame
Tumorvolume.head()
# Store the Standard Error of Tumor Volumes Grouped by Drug and Timepoint
Tumor_error= pd.DataFrame(drugdata['Tumor Volume (mm3)'].sem())
Tumor_error.rename(columns={'Tumor Volume (mm3)':'Tumor vol Std.error'},inplace = True)
Tumor_error.reset_index(inplace=True)
# Preview DataFrame
Tumor_error.head()
# Minor Data Munging to Re-Format the Data Frames
Tumor_response_pivot = pd.pivot_table(Tumorvolume, index=["Timepoint"], values="Tumor Volume (mm3)", columns=["Drug"])
Tumor_response_pivot.head()
# Preview that Reformatting worked
# set up lists for the scatter plot
drugs = ['Capomulin','Infubinol', 'Ketapril','Placebo']
colors = ['red','blue','green','black']
formats = ['o','^','s','D']
#loop through the drugs to include on the plot
for i in range(0,len(drugs)):
#x-axis is timepoint for the particular drug
drugdata_to_plot = Tumorvolume.loc[Tumorvolume['Drug'] == drugs[i],:]
x_axis = drugdata_to_plot['Timepoint']
#y-axis is tumor volume
y_axis = drugdata_to_plot['Tumor Volume (mm3)']
#errors is the standard error
err_data_to_plot = Tumor_error.loc[Tumor_error['Drug'] == drugs[i],:]
errors = err_data_to_plot['Tumor vol Std.error']
# Show the Figure
#plot the data and the error
plt.errorbar(x_axis, y_axis, yerr=errors, fmt=formats[i], marker = formats[i], color=colors[i],
alpha=0.7, label=drugs[i],ls='dashed')
#plt.scatter(x_axis, y_axis, marker= formats[i], facecolors= colors[i], edgecolors="black",
# s=x_axis, alpha=1)
# Add legend
plt.legend(loc="best")
# Add labels
plt.title('Tumor Response to Treatment')
plt.xlabel('Time (days)')
plt.ylabel('Tumor Volume (mm3)')
# Add x limits and y limits
plt.xlim(0,50)
plt.ylim(30,75)
# Add gridlines
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Metastatic Response to Treatment
###Code
# Store the Mean Met. Site Data Grouped by Drug and Timepoint
met_site = drugdata['Metastatic Sites'].mean()
# Convert to DataFrame
met_site = pd.DataFrame(met_site)
met_site.rename(columns={'Metastatic Sites':'Metastatic mean'},inplace = True)
met_site.reset_index(inplace=True)
# Preview DataFrame
met_site.head()
#compute the error from the means
met_error = drugdata['Metastatic Sites'].sem()
met_error = pd.DataFrame(met_error)
met_error.rename(columns={'Metastatic Sites':'met errors'},inplace = True)
met_error.reset_index(inplace=True)
met_error.head()
# set up lists for the scatter plot
drugs = ['Capomulin','Infubinol', 'Ketapril','Placebo']
colors = ['red','blue','green','black']
formats = ['o','^','s','D']
#loop through the drugs to include on the plot
for i in range(0,len(drugs)):
#x-axis is timepoint for the particular drug
drug_data_to_plot = met_site.loc[met_site['Drug'] == drugs[i],:]
x_axis = drug_data_to_plot['Timepoint']
#y-axis is metastatic sites
y_axis = drug_data_to_plot['Metastatic mean']
#errors is the standard error
err_data_to_plot = met_error.loc[met_error['Drug'] == drugs[i],:]
errors = err_data_to_plot['met errors']
#plot the data and the error
plt.errorbar(x_axis, y_axis, yerr=errors, fmt=formats[i], marker = formats[i], color=colors[i],
alpha=0.7, label=drugs[i],ls='dashed')
# Add legend
plt.legend(loc="best")
# Add labels
plt.title('Metastatic Response to Treatment')
plt.xlabel('Time (days)')
plt.ylabel('Metastatic Sites')
# Add x limits and y limits
plt.xlim(0,50)
plt.ylim(0,4)
# Add gridlines
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Survival Rates
###Code
# Store the Count of Mice Grouped by Drug and Timepoint (we can pass any metric to count)
mouse = all_data.groupby(["Drug", "Timepoint"])
mouse.head()
mouse['Mouse ID'].count()
survival = pd.DataFrame(mouse['Mouse ID'].count())
survival.head()
survival.rename(columns={'Mouse ID':'MouseCount'},inplace = True)
survival.reset_index(inplace=True)
survival.head()
#get drug and number of mice at timepoint 0
orig_mice = survival.loc[(survival["Timepoint"] == 0),:]
orig_mice.rename(columns={'MouseCount':'Total_mice'},inplace = True)
del orig_mice['Timepoint']
orig_mice
#merge the table with the original total mouse count with survival_drug
merge_survival = pd.merge(survival, orig_mice, on="Drug")
merge_survival.head()
#store the percentage of mice still alive
merge_survival["%alive"] = merge_survival['MouseCount'] / merge_survival['Total_mice'] * 100
merge_survival.head()
# Generate the Plot (Accounting for percentages)
#set up lists for the scatter plot
drugs = ['Capomulin','Infubinol', 'Ketapril','Placebo']
colors = ['red','blue','green','black']
formats = ['o','^','s','D']
#loop through the drugs to include on the plot
for i in range(0,len(drugs)):
#x-axis is timepoint for the particular drug
drug_data_to_plot = merge_survival.loc[merge_survival['Drug'] == drugs[i],:]
x_axis = drug_data_to_plot['Timepoint']
#y-axis is metastatic sites
y_axis = drug_data_to_plot['%alive']
#errors is the standard error
#err_data_to_plot = met_error.loc[met_error['Drug'] == drugs[i],:]
#errors = err_data_to_plot['met errors']
#plot the data and the error
plt.errorbar(x_axis, y_axis, fmt=formats[i], marker = formats[i], color=colors[i],
alpha=0.7, label=drugs[i],ls='dashed')
# Add legend
plt.legend(loc="best")
# Add label
plt.title("Survival During Treatment")
plt.xlabel('Time (days)')
plt.ylabel('Survival Rates(%)')
# Add x limits and y limits
plt.xlim(0,50)
plt.ylim(0,120)
# Add gridlines
plt.grid()
plt.show()
# Save the Figure
# Show the Figure
plt.show()
###Output
_____no_output_____
###Markdown
Summary Bar Graph
###Code
# Calculate the percent changes for each drug
Tumorvolume_change_percentage = ((Tumor_response_pivot.iloc[-1]-Tumor_response_pivot.iloc[0])/Tumor_response_pivot.iloc[0]) * 100
Tumorvolume_change_percentage.head()
# Store all Relevant Percent Changes into a Tuple
Tumorpercent_change = list(Tumorvolume_change_percentage.values)
Tumorpercent_change
# Display the data to confirm
drug=Tumorvolume_change_percentage.keys()
drug
drug_list=['Capomulin','Infubinol','Ketapril','Placebo']
# Splice the data between passing and failing drugs
passing = list(Tumorvolume_change_percentage.values > 0)
passing
colors = ['r' if tpc > 0 else 'g' for tpc in Tumorpercent_change]
colors
#x_axis = np.arange(len(Tumorpercent_change))
#x_axis
drug_list = ['Capomulin','Infubinol','Ketapril','Placebo']
change_list = [(Tumorvolume_change_percentage[drug])for drug in drug_list]
change_plt = plt.bar(drug_list,change_list,width=-1,align='edge',color=colors)
plt.title("Tumor Change Over 45 Days Treatment")
plt.ylabel("% Tumor Volume Change")
plt.ylim(-30,70)
plt.grid(True,linestyle='dashed')
# Use functions to label the percentages of changes
def autolabel(rects):
for rect in rects:
height = rect.get_height()
if height > 0:
label_position = 2
else:
label_position = -8
plt.text(rect.get_x() + rect.get_width()/2., label_position,
'%d' % int(height)+'%',color='white',
ha='center', va='bottom')
# Call functions to implement the function calls
autolabel(change_plt)
# Save an image of the chart and print it to the screen
plt.savefig("Pymaceuticals_bargraph.png")
plt.show()
###Output
_____no_output_____
###Markdown
Observations and Insights
###Code
%matplotlib notebook
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combined_mouse_df = pd.merge(mouse_metadata,study_results,on="Mouse ID", how="outer")
combined_mouse_df.head()
# Checking the number of mice.
combined_mouse_df.count()
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
combined_mouse_df.duplicated(subset=["Mouse ID", "Timepoint"])
# Optional: Get all the data for the duplicate mouse ID.
combined_mouse_df[combined_mouse_df.duplicated(["Mouse ID"])]
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
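# (Note: with subset="Mouse ID" and keep="last", only each mouse's final row survives, so the frame shrinks to one row per mouse.
#  Dropping just the duplicated mouse instead, e.g. combined_mouse_df[combined_mouse_df["Mouse ID"] != "g989"], would keep every timepoint.)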
combined_mouse_df = combined_mouse_df.drop_duplicates(subset="Mouse ID", keep="last")
combined_mouse_df
# Checking the number of mice in the clean DataFrame.
#combined_mouse_df[combined_mouse_df.drop_duplicates(["Mouse ID"])]
combined_mouse_df.count()
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
#Summary statistics table of mean, median, variance, standard deviation, and SEM
tumorstats = pd.DataFrame(combined_mouse_df.groupby("Drug Regimen").count())
#Use groupby to create summary stats by drug regime, add results into columns in summarystats
tumorstats["Mean"] = pd.DataFrame(combined_mouse_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].mean())
tumorstats["Median"] = pd.DataFrame(combined_mouse_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].median())
tumorstats["Standard Deviation"] = pd.DataFrame(combined_mouse_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].std())
tumorstats["Variance"] = pd.DataFrame(combined_mouse_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].var())
tumorstats["SEM"] = pd.DataFrame(combined_mouse_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].sem())
#Columns
tumorstats = tumorstats[["Mouse ID", "Mean", "Median", "Standard Deviation", "Variance", "SEM"]]
tumorstats.reset_index()
tumorstats.head()
#Summary statistics table of mean, median,variance,standard,deviation, and SEM
#merged_grouped_df = combined_mouse_df.groupby(["Drug Regimen"])
#tumor_mean = combined_mouse_df.groupby('Drug Regimen').mean()['Tumor Volume (mm3)']
#tumor_mean = combined_mouse_df.groupby('Drug Regimen').median()['Tumor Volume (mm3)']
#tumor_mean = combined_mouse_df.groupby('Drug Regimen').std()['Tumor Volume (mm3)']
#tumor_mean = combined_mouse_df.groupby('Drug Regimen').var()['Tumor Volume (mm3)']
#tumor_mean = combined_mouse_df.groupby('Drug Regimen').sem()['Tumor Volume (mm3)']
#Convert to DataFrame
#tumor_df = pd.DataFrame(tumor_mean)
# DataFrame
#tumor_summary_df=tumor_df.copy()
#tumor_summary_df
#tumor_summary.reset_index()
#merged_grouped_df = combined_mouse_df.groupby(["Drug Regimen"])
#tumor_mean = merged_grouped_df["Tumor Volume (mm3)"].mean()
#Convert to DataFrame
#tumor_df = pd.DataFrame(tumor_mean)
# DataFrame
#tumor_summary_df=tumor_df.copy()
#tumor_summary_df
#tumorstats.reset_index()
#len(tumorstats)
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pandas.
#mice_points['Count'] = combined_mouse_df.groupby('Drug Regimen').count()['Tumor Volume (mm3)'].values
#mice_points
#mice_points.plot.bar('Drug Regimen','Count',alpha = 0.5)
#plt.show()
mice_counts =combined_mouse_df["Drug Regimen"].value_counts()
#x_axis = clean_df["Drug Regimen"].value_counts()
mice_counts.plot(kind="bar")
plt.xlabel("Drug Regimen")
plt.xticks(rotation=45)
plt.ylabel("No.of data points")
plt.show()
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pyplot.
mice_counts = combined_mouse_df["Drug Regimen"].value_counts()
plt.bar(mice_counts.index, mice_counts.values, color='b')
plt.xlabel("Drug Regimen")
plt.xticks(rotation=45)
plt.ylabel("No. of data points")
plt.show()
#pandas_plot = mice_counts.plot(kind="bar", figsize=(8,5))
#plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
# Plotting Data
labels = 'Male', 'Female'
sex_count = combined_mouse_df["Sex"].value_counts()
colors = ['blue', 'orange']
explode = (0.05, 0) # explode 1st slice
# Plot the pie chart
piplot =sex_count.plot(kind="pie",explode=explode, labels=labels,
colors=colors, autopct='%1.1f%%', shadow=True, startangle=140)
piplot.set_xlabel("")
piplot.set_ylabel("")
piplot.set_title("Mice Gender Distribution")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pyplot
# Plotting Data
labels = 'Male', 'Female'
sex_count = combined_mouse_df["Sex"].value_counts()
colors = ['blue', 'orange']
explode = (0.05, 0)  # explode the 1st slice (re-defined here so this cell can run on its own)
# Plot pie chart
plt.title("Mice Gender Distribution")
plt.pie(sex_count, explode=explode, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=140)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
###Output
_____no_output_____
###Markdown
Observations

1. Ketapril resulted in the greatest increases in tumor volume during the period, and also had the highest number of metastatic site increases. Though it wasn't always the lowest performer, it is clear that, overall, Ketapril was among the worst performing drugs.

2. In terms of tumor volume reduction, survival rate, and number of metastatic site occurrences, Capomulin was the best performing drug. In certain metrics Capomulin was outperformed, but it had the best overall performance when taking everything into account.

3. Both Propriva and Infubinol saw the lowest survival rates of the drugs tested, even lower than the placebo, suggesting a clear relationship between their use and increased mortality rates.
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import os
clinicaltrial_data_csv = "/Users/ZGS/Documents/Data_Bootcamp/matplotlib_homework/Pymaceuticals/raw_data/clinicaltrial_data.csv"
mouse_drug_data_csv = "/Users/ZGS/Documents/Data_Bootcamp/matplotlib_homework/Pymaceuticals/raw_data/mouse_drug_data.csv"
trials_df = pd.read_csv(clinicaltrial_data_csv)
mouse_df = pd.read_csv(mouse_drug_data_csv)
trials_df.head()
mouse_df.head()
combined_df = trials_df.merge(mouse_df, on='Mouse ID')
combined_df.head()
combined_df.Drug.unique()
combined_df["Metastatic Sites"].unique()
#scatter plot that shows how the tumor volume changes over time for each treatment.
volume_avg = pd.DataFrame(combined_df.groupby(['Drug','Timepoint']).mean()['Tumor Volume (mm3)']).unstack(level = 0)
volume_sem = pd.DataFrame(combined_df.groupby(['Drug','Timepoint']).sem()['Tumor Volume (mm3)']).unstack(level = 0)
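# unstack(level=0) moves 'Drug' from the row MultiIndex into the columns, so both frames end up with
# Timepoint as the index and one ('Tumor Volume (mm3)', <Drug>) column per drug.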
volume_avg.head()
volume_sem.head()
plt.figure(figsize = (15,10))
plt.title('Tumor Volume Over Time', fontdict = {'fontsize': 18, 'fontweight': 'bold'})
plt.xlabel('Days', fontdict = {'fontsize': 15})
plt.ylabel('Tumor Volume (mm3)', fontdict = {'fontsize': 15})
plt.xticks(np.arange(0, volume_avg.index.max()+3 , 5)) # location of separators for timepoint
plt.xlim(0, volume_avg.index.max() + 1)
markers = ['o', 's', '^', 'd']
xvals = volume_avg.index
count = 0
for c in volume_avg.columns:
plt.errorbar(xvals,
volume_avg[c],
volume_sem[c],
marker = markers[count],
capthick = 1, #for caps on error bars
capsize = 3) # for caps on error bars
count += 1
legend = plt.legend(numpoints = 2, # gives two symbols in legend
frameon = True,
markerscale = 1.5)
plt.show()
# Metastatic Site changes over time for each treatment
number_spread = pd.DataFrame(combined_df.groupby(['Drug','Timepoint']).mean()['Metastatic Sites']).unstack(level = 0)
number_spread_error = pd.DataFrame(combined_df.groupby(['Drug','Timepoint']).sem()['Metastatic Sites']).unstack(level = 0)
number_spread.head()
number_spread_error.head()
plt.figure(figsize = (15,10))
plt.title('Number of Metastatic Sites Over Time', fontdict = {'fontsize': 18, 'fontweight': 'bold'})
plt.xlabel('Days', fontdict = {'fontsize': 15})
plt.ylabel('Number of Metastatics Sites', fontdict = {'fontsize': 15})
plt.xticks(np.arange(0,number_spread.index.max() + 3 ,5))
plt.xlim(0, number_spread.index.max()+1)
plt.ylim(0, number_spread.max().max() + number_spread_error.max().max() + .1)
count = 0
xvals = number_spread.index
for c in number_spread:
plt.errorbar(xvals,
number_spread[c],
number_spread_error[c],
linestyle = '--',
marker = markers[count],
capthick = 1,
capsize = 3)
count += 1
legend = plt.legend(numpoints = 2,
frameon = True,
markerscale = 1.5)
plt.show()
#Mice survival rate during treatment
number_mice = combined_df.groupby(['Drug','Timepoint']).count()['Mouse ID'].unstack(level = 0)
number_mice
plt.figure(figsize = (15,10))
plt.title('Survival Rate During Treatment', fontdict = {'fontsize': 18, 'fontweight': 'bold'})
plt.xlabel('Days', fontdict = {'fontsize': 15})
plt.ylabel('Survival Rate (%)', fontdict = {'fontsize': 15})
plt.xlim(0, number_mice.index.max())
xvals = number_mice.index
count = 0
for c in number_mice:
yvals = number_mice[c]/number_mice.loc[0,c] * 100
plt.plot(xvals,
yvals,
linestyle = '--',
marker = markers[count],
)
count += 1
#legend options
lg = plt.legend(numpoints = 2,
frameon = True,
markerscale = 1.5)
plt.show()
#bar graph that compares the total % tumor volume change for each drug across the full 45 days.
tumor_change = ((volume_avg.loc[45, :] - volume_avg.loc[0, :])/volume_avg.loc[0, :] * 100).reset_index()
tumor_change_drug = ((volume_avg.loc[45, :] - volume_avg.loc[0, :])/volume_avg.loc[0, :] * 100).reset_index()['Drug']
y = list(tumor_change[0])
x = list(tumor_change['Drug'])
plt.figure(figsize = (15,10))
plt.title('Tumor Volume Change over 45 Day Treatment', fontdict = {'fontsize': 16, 'fontweight': 'bold'})
plt.ylabel('Tumor Volume Change (%)',fontdict = {'fontsize': 14, 'fontweight': 'bold'})
plt.xlabel('Drug',fontdict = {'fontsize': 14, 'fontweight': 'bold'})
plt.axhline(y=0, color = 'black') #adds a horizontal line at zero
width = 15/15.5
plt.bar(x, y, width, color=['red' if val >= 0 else 'green' for val in y])
plt.show()
###Output
_____no_output_____
###Markdown
Observations and Insights

Mouse subjects were split fairly evenly between male and female for the overall experiment. Two of the drug regimens decreased tumor volume significantly relative to the placebo trials: Ramicane and Capomulin. Incidentally, Ramicane and Capomulin were also the two drugs with the most overall data points studied.

Dependencies and starter code
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# mouse_metadata.head()
# study_results.head()
# Combine the data into a single dataset
full_data = pd.merge(mouse_metadata, study_results, on="Mouse ID")
full_data
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
# Generate a summary statistics table of the tumor volume for each regimen's mean, median,
# variance, standard deviation, and SEM
# unique_drugs = full_data["Drug Regimen"].unique()
# unique_drugs.sort()
# print(unique_drugs)
data_df = pd.DataFrame()
regimen_data = full_data.groupby("Drug Regimen")
data_df["Tumor Volume Mean"] = regimen_data["Tumor Volume (mm3)"].mean().round(decimals=2)
data_df["Tumor Volume Median"] = regimen_data["Tumor Volume (mm3)"].median().round(decimals=2)
data_df["Tumor Volume Variance"] = regimen_data["Tumor Volume (mm3)"].var().round(decimals=2)
data_df["Tumor Volume SD"] = regimen_data["Tumor Volume (mm3)"].std().round(decimals=2)
data_df["Tumor Volume SEM"] = regimen_data["Tumor Volume (mm3)"].sem().round(decimals=2)
data_df
###Output
_____no_output_____
###Markdown
Bar plots
###Code
# Generate a bar plot showing number of data points for each treatment regimen using pandas
data_df["Count"] = regimen_data["Tumor Volume (mm3)"].count()
data_df.reset_index(inplace=True)
data_df
data_df.plot.bar(x="Drug Regimen", y="Count")
plt.show()
# # Generate a bar plot showing number of data points for each treatment regimen using pyplot
plt.bar(data_df["Drug Regimen"], data_df["Count"])
plt.xticks(rotation=90)
###Output
_____no_output_____
###Markdown
Pie plots
###Code
# Generate a pie plot showing the distribution of female versus male mice using pandas
needfully_gendered = full_data.drop_duplicates("Mouse ID")
# !!! A small note: the rubric's notes on this section say, " Two bar plots are...""
gender_group = needfully_gendered.groupby("Sex")
gender_df = pd.DataFrame(gender_group["Sex"].count())
# print(gender_df)
gender_df.plot.pie(subplots=True)
# Generate a pie plot showing the distribution of female versus male mice using pyplot
plt.pie(gender_df, labels=["Female","Male"])
plt.title("Sex")
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py:2: MatplotlibDeprecationWarning: Non-1D inputs to pie() are currently squeeze()d, but this behavior is deprecated since 3.1 and will be removed in 3.3; pass a 1D array instead.
###Markdown
Quartiles, outliers and boxplots
###Code
# Calculate the final tumor volume of each mouse in the four most promising treatment regimens.
# Calculate the IQR and quantitatively determine if there are any potential outliers.
promising=data_df.sort_values(by="Tumor Volume Mean")
promising_drugs = promising["Drug Regimen"][0:4]
full_data["Promising Drug"] = full_data["Drug Regimen"].isin(promising_drugs)
promising_df = full_data.loc[full_data["Promising Drug"],:].drop_duplicates("Mouse ID",keep="last").reset_index(drop=True)
promising_df.drop(columns=["Sex","Age_months","Weight (g)","Timepoint","Metastatic Sites","Promising Drug"],inplace=True)
promising_df.head()
print(promising_drugs)
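# Note: the quartiles below are computed on the pooled final tumor volumes of all four promising regimens
# (a single set of bounds overall) rather than separately for each regimen.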
quartiles = promising_df["Tumor Volume (mm3)"].quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
outliers_df = promising_df.loc[(promising_df["Tumor Volume (mm3)"] > upper_bound) | (promising_df["Tumor Volume (mm3)"] < lower_bound), :]
outliers_df
#no outliers present among Capomulin, Ramicane, Propriva, Ceftamin
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
capo_final = promising_df.loc[promising_df["Drug Regimen"] == "Capomulin"]
rami_final = promising_df.loc[promising_df["Drug Regimen"] == "Ramicane"]
prop_final = promising_df.loc[promising_df["Drug Regimen"] == "Propriva"]
ceft_final = promising_df.loc[promising_df["Drug Regimen"] == "Ceftamin"]
fig, ax = plt.subplots() # each variable exposes attributes/methods: "fig" controls figure-level formatting, "ax" handles the drawing
ax.boxplot([capo_final["Tumor Volume (mm3)"],rami_final["Tumor Volume (mm3)"],prop_final["Tumor Volume (mm3)"],ceft_final["Tumor Volume (mm3)"]])
ax.set_xticklabels(promising_drugs)
plt.title("Variance in Tumor Volume for Most Promising Regimens", x=.5, y=1)
plt.subplots_adjust(top = 0.99, bottom=0.01, hspace=.25)
###Output
_____no_output_____
###Markdown
Line and scatter plots
###Code
# Generate a line plot of time point versus tumor volume for *a mouse* treated with Capomulin (s185)
s185 = full_data.loc[full_data["Mouse ID"] == "s185"]
s185
plt.plot(s185["Timepoint"], s185["Tumor Volume (mm3)"])
plt.xlabel("Timepoint")
plt.ylabel("Tumor Volume (mm3)")
plt.title("Capomulin Tumor Volume Over Time: Case Study (s185F)")
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capomulin = full_data.loc[full_data["Drug Regimen"] == "Capomulin"]
capo_avgs = capomulin.groupby(capomulin["Mouse ID"]).mean()
avg_volume = capo_avgs["Tumor Volume (mm3)"].mean()
plt.figure(figsize=(10, 6))
plt.scatter(capo_avgs["Weight (g)"], capo_avgs["Tumor Volume (mm3)"])
plt.axhline(avg_volume, c="red", alpha=0.7)  # alpha must be between 0 and 1
plt.text(25.7,40.7,f"Average Tumor Volume ({round(avg_volume,2)})")
plt.xlabel("Weight (g)")
plt.ylabel("Tumor Volume (mm3)")
plt.title("Tumor Volume by Weight")
# Note: the exact expected output ("versus"?) is ambiguous; this scatter shows average tumor volume against mouse weight.
# Calculate the correlation coefficient and linear regression model for mouse weight
# and average tumor volume for the Capomulin regimen
weight = capomulin.groupby(capomulin["Mouse ID"])["Weight (g)"].mean()
volume = capomulin.groupby(capomulin["Mouse ID"])["Tumor Volume (mm3)"].mean()
slope, intercept, r, p, std_err = st.linregress(weight, volume)  # avoid shadowing the built-in name "int"
fit = slope * weight + intercept
plt.scatter(weight,volume)
plt.xlabel("Mouse Weight")
plt.ylabel("Tumor Volume (mm3)")
plt.plot(weight,fit,"--")
plt.xticks(weight, rotation=90)
plt.show()
corr = round(st.pearsonr(weight,volume)[0],2)
print(f'The correlation between weight and tumor volume is {corr}')
###Output
The correlation between weight and tumor volume is 0.84
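###Markdown
As an optional touch, the fitted parameters from the cell above can be written onto the plot itself; a small sketch that reuses `weight`, `volume`, `slope`, `intercept`, and `fit`:
###Code
# Redraw the scatter and regression line, annotated with the fitted equation.
plt.scatter(weight, volume)
plt.plot(weight, fit, "--")
plt.annotate(f"y = {slope:.2f}x + {intercept:.2f}", (weight.min(), volume.max()), color="red")
plt.xlabel("Mouse Weight")
plt.ylabel("Tumor Volume (mm3)")
plt.show()
###Output
_____no_output_____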
###Markdown
Observations and Insights 1. The drug of interest, Capomulin, and another drug, Ramicane, produced the lowest final tumor volumes, on average, of the four best regimens in the study. 2. Ramicane actually performed slightly better than Capomulin in terms of the overall final average tumor volume in the mice. 3. Capomulin and Ramicane had the most mouse treatments (230 and 228 data points). 4. Based on the correlation of 0.84 between mouse weight and average tumor volume, there is a positive correlation between average tumor volume and weight within the subject mice.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
trial_data = pd.merge(mouse_metadata, study_results, on= 'Mouse ID')
trial_data.head()
# Checking the number of mice.
number_of_mice = trial_data["Mouse ID"].value_counts()
number_of_mice
# Checking the number of mice.
# number_of_mice = trial_data["Mouse ID"].count()
# number_of_mice
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
# trial_data.set_index('Mouse ID', inplace=True)
# trial_data.drop('g989', inplace=True)
# trial_data.reset_index(inplace=True)
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
drop_dup_mouse_id1 = trial_data.loc[trial_data.duplicated(subset=['Mouse ID', 'Timepoint',]),'Mouse ID'].unique()
drop_dup_mouse_id1
# Optional: Get all the data for the duplicate mouse ID.
print(trial_data.loc[trial_data['Mouse ID']=='g989'])
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
trial_data.set_index('Mouse ID', inplace=True)
trial_data.drop('g989', inplace=True)
trial_data.reset_index(inplace=True)
# Checking the number of mice in the clean DataFrame.
number_of_mice = trial_data["Mouse ID"].value_counts()
number_of_mice
###Output
_____no_output_____
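###Markdown
A shorter equivalent of the set_index/drop/reset_index sequence above is a boolean filter; a minimal sketch:
###Code
# Keep every row whose Mouse ID is not the duplicated mouse.
clean_data = trial_data[trial_data["Mouse ID"] != "g989"]
clean_data["Mouse ID"].nunique()
###Output
_____no_output_____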
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
mean = trial_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].mean()
median = trial_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].median()
variance = trial_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].var()
stdv = trial_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].std()
sem = trial_data.groupby('Drug Regimen')['Tumor Volume (mm3)'].sem()
summary_df = pd.DataFrame({"Mean": mean, "Median": median, "Variance": variance, "Standard Deviation": stdv,
"SEM": sem})
summary_df
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pandas.
regimen_data_points = trial_data.groupby(["Drug Regimen"]).count()["Mouse ID"]
regimen_data_points
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pandas.
regimen_data_points.plot(kind="bar", figsize=(10,5))
#set chart title
plt.title("Number of Mice Per Regiment")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Mice")
#show chart and set layout
plt.show()
plt.tight_layout()
# Generate a bar plot showing the number of mice per time point for each treatment throughout the course of the study using pyplot.
# Create an array with the data point counts for each regimen (same values as regimen_data_points above)
users = [230, 178, 178, 188, 186, 181, 161, 228, 181, 182]
#Set the x_axis to be the amount of the Data Regimen
x_axis = np.arange(len(regimen_data_points))
plt.bar(x_axis, users, color='b', alpha=0.75, align='center')
tick_locations = [value for value in x_axis]
plt.xticks(tick_locations, ['Capomulin', 'Ceftamin', 'Infubinol', 'Ketapril', 'Naftisol', 'Placebo', 'Propriva', 'Ramicane', 'Stelasyn', 'Zoniferol'], rotation='vertical')
plt.xlim(-0.75, len(x_axis)-0.25)
plt.ylim(0, max(users)+10)
plt.title("Number of Mice Per Regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Data Points")
# Generate a pie plot showing the distribution of female versus male mice using pandas
#Group by "Mouse ID" and "Sex" to find the unique number of male vs female
groupby_gender = trial_data.groupby(["Mouse ID","Sex"])
groupby_gender
mouse_gender_df = pd.DataFrame(groupby_gender.size())
#Create the dataframe with total count of Female and Male mice
mouse_gender = pd.DataFrame(mouse_gender_df.groupby(["Sex"]).count())
mouse_gender.columns = ["Total Count"]
#create and format the percentage of female vs male
mouse_gender["Percentage of Sex"] = (100*(mouse_gender["Total Count"]/mouse_gender["Total Count"].sum()))
#format the "Percentage of Sex" column
mouse_gender["Percentage of Sex"] = mouse_gender["Percentage of Sex"]
#gender_df
mouse_gender
# Generate a pie plot showing the distribution of female versus male mice using pandas
plot = mouse_gender.plot.pie(y='Total Count', startangle=140, figsize=(5,5), shadow=True, autopct="%1.1f%%")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
# Create Labels for the sections of the pie
labels = ["Female","Male"]
#List the values of each section of the pie chart
sizes = [49.596774,50.403226]
#Create the pie chart based upon the values
plt.pie(sizes, labels=labels, autopct="%1.1f%%", shadow=True, startangle=140)
#Set equal axis
plt.axis("equal")
###Output
_____no_output_____
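###Markdown
Rather than hard-coding the `sizes` list, the same percentages can be taken from the `mouse_gender` frame computed above; a minimal sketch:
###Code
# Build the pyplot pie directly from the computed percentages instead of literals.
sizes = mouse_gender["Percentage of Sex"]
plt.pie(sizes, labels=sizes.index, autopct="%1.1f%%", shadow=True, startangle=140)
plt.axis("equal")
plt.show()
###Output
_____no_output_____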
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
# Sort data by Drug Regime, Mouse ID and Timepoint
sorted_df = trial_data.sort_values(["Drug Regimen", "Mouse ID", "Timepoint"], ascending=True)
# Select final volume of each mouse
max_df = sorted_df.loc[sorted_df["Timepoint"] == 45]
max_df.head().reset_index()
# Select data for Capomulin regimen and reset index
cap_data_df = max_df[max_df['Drug Regimen'].isin(['Capomulin'])]
cap_data_df.head().reset_index()
# Convert column "Tumor Volume" of the Capomulin regimen into a dataframe object
cap_list = cap_data_df.sort_values(["Tumor Volume (mm3)"], ascending=True).reset_index()
cap_list = cap_list["Tumor Volume (mm3)"]
cap_list
# If the data is in a dataframe, we use pandas to give quartile calculations
quartiles = cap_list.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"The lower quartile of tumor volume is: {lowerq}")
print(f"The upper quartile of tumor volume is: {upperq}")
print(f"The interquartile range of tumor volume is: {iqr}")
print(f"The the median of tumor volume is: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# Select data for Ramicane regimen and reset index
ram_data_df = max_df[max_df['Drug Regimen'].isin(['Ramicane'])]
ram_data_df.head().reset_index()
# Convert column "Tumor Volume" of the Ramicane regimen into a dataframe object
ram_list = ram_data_df.sort_values(["Tumor Volume (mm3)"], ascending=True).reset_index()
ram_list = ram_list["Tumor Volume (mm3)"]
ram_list
# If the data is in a dataframe, we use pandas to give quartile calculations
quartiles = ram_list.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"The lower quartile of temperatures is: {lowerq}")
print(f"The upper quartile of temperatures is: {upperq}")
print(f"The interquartile range of temperatures is: {iqr}")
print(f"The the median of temperatures is: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# Select data for Infubinol regimen and reset index
inf_data_df = max_df[max_df['Drug Regimen'].isin(['Infubinol'])]
inf_data_df.head(10).reset_index()
# Convert column "Tumor Volume" of the Infubinol regimen into a dataframe object
inf_list = inf_data_df.sort_values(["Tumor Volume (mm3)"], ascending=True).reset_index()
inf_list = inf_list["Tumor Volume (mm3)"]
inf_list
# If the data is in a dataframe, we use pandas to give quartile calculations
quartiles = inf_list.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"The lower quartile of temperatures is: {lowerq}")
print(f"The upper quartile of temperatures is: {upperq}")
print(f"The interquartile range of temperatures is: {iqr}")
print(f"The the median of temperatures is: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# Select data for Ceftamin regimen and reset index
cef_data_df = max_df[max_df['Drug Regimen'].isin(['Ceftamin'])]
cef_data_df.head().reset_index()
# Convert column "Tumor Volume" of the Ceftamin regimen into a dataframe object
cef_list = cef_data_df.sort_values(["Tumor Volume (mm3)"], ascending=True).reset_index()
cef_list = cef_list["Tumor Volume (mm3)"]
cef_list
# If the data is in a dataframe, we use pandas to give quartile calculations
quartiles = cef_list.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"The lower quartile of temperatures is: {lowerq}")
print(f"The upper quartile of temperatures is: {upperq}")
print(f"The interquartile range of temperatures is: {iqr}")
print(f"The the median of temperatures is: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
max_df.head()
data_to_plot = [cap_list, ram_list, inf_list, cef_list]
fig1, ax1 = plt.subplots()
ax1.set_title('Final Tumor Volume per Regimen')
ax1.set_ylabel('Final Tumor Volume (mm3)')
ax1.set_xlabel('Drug Regimen')
ax1.boxplot(data_to_plot, labels=["Capomulin","Ramicane","Infubinol","Ceftamin",])
plt.savefig('boxplot')
plt.show()
###Output
_____no_output_____
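###Markdown
The four per-regimen quartile blocks above repeat the same arithmetic; a minimal sketch of how they could be collapsed into one loop over the regimens of interest:
###Code
# One loop over the four regimens instead of four copied IQR blocks.
for regimen in ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"]:
    volumes = max_df.loc[max_df["Drug Regimen"] == regimen, "Tumor Volume (mm3)"]
    lowerq, upperq = volumes.quantile([0.25, 0.75])
    iqr = upperq - lowerq
    lower_bound = lowerq - 1.5 * iqr
    upper_bound = upperq + 1.5 * iqr
    outliers = volumes[(volumes < lower_bound) | (volumes > upper_bound)]
    print(f"{regimen}: IQR = {iqr:.2f}, potential outliers = {outliers.round(2).tolist()}")
###Output
_____no_output_____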
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
cap_df = trial_data.loc[trial_data["Drug Regimen"] == "Capomulin",:]
forline_df = cap_df.loc[cap_df["Mouse ID"] == "g288",:]
forline_df.head()
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
x_axisTP = forline_df["Timepoint"]
tumsiz = forline_df["Tumor Volume (mm3)"]
plt.title('Capomulin treatment of mouse g288')
plt.plot(x_axisTP, tumsiz,linewidth=2, markersize=12)
plt.xlabel('Timepoint (Days)')
plt.ylabel('Tumor Volume (mm3)')
plt.savefig('linechart')
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
capavg = cap_df.groupby(['Mouse ID']).mean()
plt.scatter(capavg['Weight (g)'],capavg['Tumor Volume (mm3)'])
plt.xlabel('Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.savefig('scatterplot')
plt.show()
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient for mouse weight and average tumor volume for the Capomulin regimen
corr=round(st.pearsonr(capavg['Weight (g)'],capavg['Tumor Volume (mm3)'])[0],2)
print(f"The correlation between mouse weight and average tumor volume is {corr}")
# Calculate the linear regression model for mouse weight and average tumor volume for the Capomulin regimen
model=st.linregress(capavg['Weight (g)'],capavg['Tumor Volume (mm3)'])
model
#capavg['Weight (g)']
mslope = model.slope            # take the fitted values directly rather than hard-coding them
bintercept = model.intercept
#plot
y_values = capavg['Weight (g)']*mslope+bintercept
plt.scatter(capavg['Weight (g)'],capavg['Tumor Volume (mm3)'])
plt.plot(capavg['Weight (g)'],y_values,color="red")
plt.xlabel('Weight(g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.savefig('linearregression')
plt.show()
###Output
_____no_output_____ |
bases/br_mc_auxilio_emergencial/code/[dados] br_mc_auxilio_emergencial.ipynb | ###Markdown
**Auxílio Emergencial 2020**
###Code
abril = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/input/202004_AuxilioEmergencial.zip'
extract = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/temp/mes=2020-04'
with zipfile.ZipFile(abril, 'r') as zip_ref:
zip_ref.extractall(extract)
for uf in ufs:
auxilio_abril = pd.DataFrame()
ch=0
for chunk in pd.read_csv(extract+'/202004_AuxilioEmergencial.csv',sep=';', encoding='latin-1', dtype='string', chunksize=500000):
ch = ch + 1
print("Fazendo chunk parte {}".format(ch))
chunk.rename(columns=rename, inplace=True)
chunk = chunk[chunk['sigla_uf']==uf]
chunk['mes'] = pd.to_datetime(chunk['mes'], format='%Y%m', errors='coerce').dt.to_period('m')
chunk['valor_beneficio'] = chunk['valor_beneficio'].apply(lambda x: str(x).replace(',','.'))
chunk.loc[(chunk['nis_beneficiario'] == '00000000000'), 'nis_beneficiario'] = ''
chunk.loc[(chunk['nis_responsavel'] == '-2'), 'nis_responsavel'] = ''
chunk['cpf_responsavel'] = chunk['cpf_responsavel'].fillna('')
chunk.loc[(chunk['nome_responsavel'] == 'Não se aplica'), 'nome_responsavel'] = ''
chunk.loc[(chunk['observacao'] == 'Não há'), 'observacao'] = ''
chunk['observacao'] = chunk['observacao'].fillna('')
chunk.sort_values(['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel'], inplace=True)
chunk = chunk[ordem]
chunk.drop(['mes', 'sigla_uf'], axis=1, inplace=True)
auxilio_abril = auxilio_abril.append(chunk)
auxilio_abril.drop_duplicates(subset=['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel', 'nome_beneficiario', 'observacao'], keep ='first', inplace = True)
exec('auxilio_abril.to_csv("/content/gdrive/MyDrive/br_mc_auxilio_emergencial/output/mes=2020-04/sigla_uf={}/microdados.csv",index=False, encoding="utf-8", na_rep="")'.format(uf))
del auxilio_abril
maio = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/input/202005_AuxilioEmergencial.zip'
extract = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/temp/mes=2020-05'
with zipfile.ZipFile(maio, 'r') as zip_ref:
zip_ref.extractall(extract)
for uf in ufs:
auxilio_maio = pd.DataFrame()
ch=0
for chunk in pd.read_csv(extract+'/202005_AuxilioEmergencial.csv',sep=';', encoding='latin-1', dtype='string', chunksize=500000):
ch = ch + 1
print("Fazendo chunk parte {}".format(ch))
chunk.rename(columns=rename, inplace=True)
chunk = chunk[chunk['sigla_uf']==uf]
chunk['mes'] = pd.to_datetime(chunk['mes'], format='%Y%m', errors='coerce').dt.to_period('m')
chunk['valor_beneficio'] = chunk['valor_beneficio'].apply(lambda x: str(x).replace(',','.'))
chunk.loc[(chunk['nis_beneficiario'] == '00000000000'), 'nis_beneficiario'] = ''
chunk.loc[(chunk['nis_responsavel'] == '-2'), 'nis_responsavel'] = ''
chunk['cpf_responsavel'] = chunk['cpf_responsavel'].fillna('')
chunk.loc[(chunk['nome_responsavel'] == 'Não se aplica'), 'nome_responsavel'] = ''
chunk.loc[(chunk['observacao'] == 'Não há'), 'observacao'] = ''
chunk['observacao'] = chunk['observacao'].fillna('')
chunk.sort_values(['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel'], inplace=True)
chunk = chunk[ordem]
chunk.drop(['mes', 'sigla_uf'], axis=1, inplace=True)
auxilio_maio = auxilio_maio.append(chunk)
auxilio_maio.drop_duplicates(subset=['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel', 'nome_beneficiario', 'observacao'], keep ='first', inplace = True)
exec('auxilio_maio.to_csv("/content/gdrive/MyDrive/br_mc_auxilio_emergencial/output/mes=2020-05/sigla_uf={}/microdados.csv",index=False, encoding="utf-8", na_rep="")'.format(uf))
del auxilio_maio
junho = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/input/202006_AuxilioEmergencial.zip'
extract = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/temp/mes=2020-06'
with zipfile.ZipFile(junho, 'r') as zip_ref:
zip_ref.extractall(extract)
for uf in ufs:
auxilio_junho = pd.DataFrame()
ch=0
for chunk in pd.read_csv(extract+'/202006_AuxilioEmergencial.csv',sep=';', encoding='latin-1', dtype='string', chunksize=500000):
ch = ch + 1
print("Fazendo chunk parte {}".format(ch))
chunk.rename(columns=rename, inplace=True)
chunk = chunk[chunk['sigla_uf']==uf]
chunk['mes'] = pd.to_datetime(chunk['mes'], format='%Y%m', errors='coerce').dt.to_period('m')
chunk['valor_beneficio'] = chunk['valor_beneficio'].apply(lambda x: str(x).replace(',','.'))
chunk.loc[(chunk['nis_beneficiario'] == '00000000000'), 'nis_beneficiario'] = ''
chunk.loc[(chunk['nis_responsavel'] == '-2'), 'nis_responsavel'] = ''
chunk['cpf_responsavel'] = chunk['cpf_responsavel'].fillna('')
chunk.loc[(chunk['nome_responsavel'] == 'Não se aplica'), 'nome_responsavel'] = ''
chunk.loc[(chunk['observacao'] == 'Não há'), 'observacao'] = ''
chunk['observacao'] = chunk['observacao'].fillna('')
chunk.sort_values(['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel'], inplace=True)
chunk = chunk[ordem]
chunk.drop(['mes', 'sigla_uf'], axis=1, inplace=True)
auxilio_junho = auxilio_junho.append(chunk)
auxilio_junho.drop_duplicates(subset=['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel', 'nome_beneficiario', 'observacao'], keep ='first', inplace = True)
exec('auxilio_junho.to_csv("/content/gdrive/MyDrive/br_mc_auxilio_emergencial/output/mes=2020-06/sigla_uf={}/microdados.csv",index=False, encoding="utf-8", na_rep="")'.format(uf))
del auxilio_junho
julho = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/input/202007_AuxilioEmergencial.zip'
extract = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/temp/mes=2020-07'
with zipfile.ZipFile(julho, 'r') as zip_ref:
zip_ref.extractall(extract)
for uf in ufs:
auxilio_julho = pd.DataFrame()
ch=0
for chunk in pd.read_csv(extract+'/202007_AuxilioEmergencial.csv',sep=';', encoding='latin-1', dtype='string', chunksize=500000):
ch = ch + 1
print("Fazendo chunk parte {}".format(ch))
chunk.rename(columns=rename, inplace=True)
chunk = chunk[chunk['sigla_uf']==uf]
chunk['mes'] = pd.to_datetime(chunk['mes'], format='%Y%m', errors='coerce').dt.to_period('m')
chunk['valor_beneficio'] = chunk['valor_beneficio'].apply(lambda x: str(x).replace(',','.'))
chunk.loc[(chunk['nis_beneficiario'] == '00000000000'), 'nis_beneficiario'] = ''
chunk.loc[(chunk['nis_responsavel'] == '-2'), 'nis_responsavel'] = ''
chunk['cpf_responsavel'] = chunk['cpf_responsavel'].fillna('')
chunk.loc[(chunk['nome_responsavel'] == 'Não se aplica'), 'nome_responsavel'] = ''
chunk.loc[(chunk['observacao'] == 'Não há'), 'observacao'] = ''
chunk['observacao'] = chunk['observacao'].fillna('')
chunk.sort_values(['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel'], inplace=True)
chunk = chunk[ordem]
chunk.drop(['mes', 'sigla_uf'], axis=1, inplace=True)
auxilio_julho = auxilio_julho.append(chunk)
auxilio_julho.drop_duplicates(subset=['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel', 'nome_beneficiario', 'observacao'], keep ='first', inplace = True)
exec('auxilio_julho.to_csv("/content/gdrive/MyDrive/br_mc_auxilio_emergencial/output/mes=2020-07/sigla_uf={}/microdados.csv",index=False, encoding="utf-8", na_rep="")'.format(uf))
del auxilio_julho
agosto = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/input/202008_AuxilioEmergencial.zip'
extract = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/temp/mes=2020-08'
with zipfile.ZipFile(agosto, 'r') as zip_ref:
zip_ref.extractall(extract)
for uf in ufs:
auxilio_agosto = pd.DataFrame()
ch=0
for chunk in pd.read_csv(extract+'/202008_AuxilioEmergencial.csv',sep=';', encoding='latin-1', dtype='string', chunksize=500000):
ch = ch + 1
print("Fazendo chunk parte {}".format(ch))
chunk.rename(columns=rename, inplace=True)
chunk = chunk[chunk['sigla_uf']==uf]
chunk['mes'] = pd.to_datetime(chunk['mes'], format='%Y%m', errors='coerce').dt.to_period('m')
chunk['valor_beneficio'] = chunk['valor_beneficio'].apply(lambda x: str(x).replace(',','.'))
chunk.loc[(chunk['nis_beneficiario'] == '00000000000'), 'nis_beneficiario'] = ''
chunk.loc[(chunk['nis_responsavel'] == '-2'), 'nis_responsavel'] = ''
chunk['cpf_responsavel'] = chunk['cpf_responsavel'].fillna('')
chunk.loc[(chunk['nome_responsavel'] == 'Não se aplica'), 'nome_responsavel'] = ''
chunk.loc[(chunk['observacao'] == 'Não há'), 'observacao'] = ''
chunk['observacao'] = chunk['observacao'].fillna('')
chunk.sort_values(['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel'], inplace=True)
chunk = chunk[ordem]
chunk.drop(['mes', 'sigla_uf'], axis=1, inplace=True)
auxilio_agosto = auxilio_agosto.append(chunk)
auxilio_agosto.drop_duplicates(subset=['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel', 'nome_beneficiario', 'observacao'], keep ='first', inplace = True)
exec('auxilio_agosto.to_csv("/content/gdrive/MyDrive/br_mc_auxilio_emergencial/output/mes=2020-08/sigla_uf={}/microdados.csv",index=False, encoding="utf-8", na_rep="")'.format(uf))
del auxilio_agosto
setembro = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/input/202009_AuxilioEmergencial.zip'
extract = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/temp/mes=2020-09'
with zipfile.ZipFile(setembro, 'r') as zip_ref:
zip_ref.extractall(extract)
for uf in ufs:
auxilio_setembro = pd.DataFrame()
ch=0
for chunk in pd.read_csv(extract+'/202009_AuxilioEmergencial.csv',sep=';', encoding='latin-1', dtype='string', chunksize=500000):
ch = ch + 1
print("Fazendo chunk parte {}".format(ch))
chunk.rename(columns=rename, inplace=True)
chunk = chunk[chunk['sigla_uf']==uf]
chunk['mes'] = pd.to_datetime(chunk['mes'], format='%Y%m', errors='coerce').dt.to_period('m')
chunk['valor_beneficio'] = chunk['valor_beneficio'].apply(lambda x: str(x).replace(',','.'))
chunk.loc[(chunk['nis_beneficiario'] == '00000000000'), 'nis_beneficiario'] = ''
chunk.loc[(chunk['nis_responsavel'] == '-2'), 'nis_responsavel'] = ''
chunk['cpf_responsavel'] = chunk['cpf_responsavel'].fillna('')
chunk.loc[(chunk['nome_responsavel'] == 'Não se aplica'), 'nome_responsavel'] = ''
chunk.loc[(chunk['observacao'] == 'Não há'), 'observacao'] = ''
chunk['observacao'] = chunk['observacao'].fillna('')
chunk.sort_values(['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel'], inplace=True)
chunk = chunk[ordem]
chunk.drop(['mes', 'sigla_uf'], axis=1, inplace=True)
auxilio_setembro = auxilio_setembro.append(chunk)
auxilio_setembro.drop_duplicates(subset=['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel', 'nome_beneficiario', 'observacao'], keep ='first', inplace = True)
exec('auxilio_setembro.to_csv("/content/gdrive/MyDrive/br_mc_auxilio_emergencial/output/mes=2020-09/sigla_uf={}/microdados.csv",index=False, encoding="utf-8", na_rep="")'.format(uf))
del auxilio_setembro
outubro = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/input/202010_AuxilioEmergencial.zip'
extract = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/temp/mes=2020-10'
with zipfile.ZipFile(outubro, 'r') as zip_ref:
zip_ref.extractall(extract)
for uf in ufs:
auxilio_outubro = pd.DataFrame()
ch=0
for chunk in pd.read_csv(extract+'/202010_AuxilioEmergencial.csv',sep=';', encoding='latin-1', dtype='string', chunksize=500000):
ch = ch + 1
print("Fazendo chunk parte {}".format(ch))
chunk.rename(columns=rename, inplace=True)
chunk = chunk[chunk['sigla_uf']==uf]
chunk['mes'] = pd.to_datetime(chunk['mes'], format='%Y%m', errors='coerce').dt.to_period('m')
chunk['valor_beneficio'] = chunk['valor_beneficio'].apply(lambda x: str(x).replace(',','.'))
chunk.loc[(chunk['nis_beneficiario'] == '00000000000'), 'nis_beneficiario'] = ''
chunk.loc[(chunk['nis_responsavel'] == '-2'), 'nis_responsavel'] = ''
chunk['cpf_responsavel'] = chunk['cpf_responsavel'].fillna('')
chunk.loc[(chunk['nome_responsavel'] == 'Não se aplica'), 'nome_responsavel'] = ''
chunk.loc[(chunk['observacao'] == 'Não há'), 'observacao'] = ''
chunk['observacao'] = chunk['observacao'].fillna('')
chunk.sort_values(['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel'], inplace=True)
chunk = chunk[ordem]
chunk.drop(['mes', 'sigla_uf'], axis=1, inplace=True)
auxilio_outubro = auxilio_outubro.append(chunk)
auxilio_outubro.drop_duplicates(subset=['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel', 'nome_beneficiario', 'observacao'], keep ='first', inplace = True)
exec('auxilio_outubro.to_csv("/content/gdrive/MyDrive/br_mc_auxilio_emergencial/output/mes=2020-10/sigla_uf={}/microdados.csv",index=False, encoding="utf-8", na_rep="")'.format(uf))
del auxilio_outubro
novembro = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/input/202011_AuxilioEmergencial.zip'
extract = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/temp/mes=2020-11'
with zipfile.ZipFile(novembro, 'r') as zip_ref:
zip_ref.extractall(extract)
for uf in ufs:
auxilio_novembro = pd.DataFrame()
ch=0
for chunk in pd.read_csv(extract+'/202011_AuxilioEmergencial.csv',sep=';', encoding='latin-1', dtype='string', chunksize=500000):
ch = ch + 1
print("Fazendo chunk parte {}".format(ch))
chunk.rename(columns=rename, inplace=True)
chunk = chunk[chunk['sigla_uf']==uf]
chunk['mes'] = pd.to_datetime(chunk['mes'], format='%Y%m', errors='coerce').dt.to_period('m')
chunk['valor_beneficio'] = chunk['valor_beneficio'].apply(lambda x: str(x).replace(',','.'))
chunk.loc[(chunk['nis_beneficiario'] == '00000000000'), 'nis_beneficiario'] = ''
chunk.loc[(chunk['nis_responsavel'] == '-2'), 'nis_responsavel'] = ''
chunk['cpf_responsavel'] = chunk['cpf_responsavel'].fillna('')
chunk.loc[(chunk['nome_responsavel'] == 'Não se aplica'), 'nome_responsavel'] = ''
chunk.loc[(chunk['observacao'] == 'Não há'), 'observacao'] = ''
chunk['observacao'] = chunk['observacao'].fillna('')
chunk.sort_values(['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel'], inplace=True)
chunk = chunk[ordem]
chunk.drop(['mes', 'sigla_uf'], axis=1, inplace=True)
auxilio_novembro = auxilio_novembro.append(chunk)
auxilio_novembro.drop_duplicates(subset=['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel', 'nome_beneficiario', 'observacao'], keep ='first', inplace = True)
exec('auxilio_novembro.to_csv("/content/gdrive/MyDrive/br_mc_auxilio_emergencial/output/mes=2020-11/sigla_uf={}/microdados.csv",index=False, encoding="utf-8", na_rep="")'.format(uf))
del auxilio_novembro
dezembro = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/input/202012_AuxilioEmergencial.zip'
extract = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/temp/mes=2020-12'
with zipfile.ZipFile(dezembro, 'r') as zip_ref:
zip_ref.extractall(extract)
ufs = {'SP'}
for uf in ufs:
auxilio_dezembro = pd.DataFrame()
ch=0
for chunk in pd.read_csv(extract+'/202012_AuxilioEmergencial.csv',sep=';', encoding='latin-1', dtype='string', chunksize=500000):
ch = ch + 1
print("Fazendo chunk parte {}".format(ch))
chunk.rename(columns=rename, inplace=True)
chunk = chunk[chunk['sigla_uf']==uf]
chunk['mes'] = pd.to_datetime(chunk['mes'], format='%Y%m', errors='coerce').dt.to_period('m')
chunk['valor_beneficio'] = chunk['valor_beneficio'].apply(lambda x: str(x).replace(',','.'))
chunk.loc[(chunk['nis_beneficiario'] == '00000000000'), 'nis_beneficiario'] = ''
chunk.loc[(chunk['nis_responsavel'] == '-2'), 'nis_responsavel'] = ''
chunk['cpf_responsavel'] = chunk['cpf_responsavel'].fillna('')
chunk.loc[(chunk['nome_responsavel'] == 'Não se aplica'), 'nome_responsavel'] = ''
chunk.loc[(chunk['observacao'] == 'Não há'), 'observacao'] = ''
chunk['observacao'] = chunk['observacao'].fillna('')
chunk.sort_values(['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel'], inplace=True)
chunk = chunk[ordem]
chunk.drop(['mes', 'sigla_uf'], axis=1, inplace=True)
auxilio_dezembro = auxilio_dezembro.append(chunk)
auxilio_dezembro.drop_duplicates(subset=['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel', 'nome_beneficiario', 'observacao'], keep ='first', inplace = True)
exec('auxilio_dezembro.to_csv("/content/gdrive/MyDrive/br_mc_auxilio_emergencial/output/mes=2020-12/sigla_uf={}/microdados.csv",index=False, encoding="utf-8", na_rep="")'.format(uf))
del auxilio_dezembro
###Output
Fazendo chunk parte 1
Fazendo chunk parte 2
Fazendo chunk parte 3
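###Markdown
The 2020 cells above repeat the same extract/clean/partition steps for every month; a minimal sketch of how they could be factored into a single helper (it assumes `rename`, `ordem` and `ufs` are defined as above, and it is a sketch rather than a drop-in replacement):
###Code
def process_month(yyyymm, month_tag):
    # Sketch only: same cleaning as the monthly cells above, parameterised by month.
    base = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial'
    extract = f'{base}/temp/mes={month_tag}'
    with zipfile.ZipFile(f'{base}/input/{yyyymm}_AuxilioEmergencial.zip', 'r') as zip_ref:
        zip_ref.extractall(extract)
    for uf in ufs:
        parts = []
        for chunk in pd.read_csv(f'{extract}/{yyyymm}_AuxilioEmergencial.csv', sep=';',
                                 encoding='latin-1', dtype='string', chunksize=500000):
            chunk.rename(columns=rename, inplace=True)
            chunk = chunk[chunk['sigla_uf'] == uf]
            chunk['valor_beneficio'] = chunk['valor_beneficio'].str.replace(',', '.')
            chunk.loc[chunk['nis_beneficiario'] == '00000000000', 'nis_beneficiario'] = ''
            chunk.loc[chunk['nis_responsavel'] == '-2', 'nis_responsavel'] = ''
            chunk['cpf_responsavel'] = chunk['cpf_responsavel'].fillna('')
            chunk.loc[chunk['nome_responsavel'] == 'Não se aplica', 'nome_responsavel'] = ''
            chunk.loc[chunk['observacao'] == 'Não há', 'observacao'] = ''
            chunk['observacao'] = chunk['observacao'].fillna('')
            parts.append(chunk[ordem].drop(columns=['mes', 'sigla_uf']))
        month_df = pd.concat(parts).sort_values(
            ['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel'])
        month_df.drop_duplicates(
            subset=['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario',
                    'cpf_responsavel', 'nome_beneficiario', 'observacao'],
            keep='first', inplace=True)
        month_df.to_csv(f'{base}/output/mes={month_tag}/sigla_uf={uf}/microdados.csv',
                        index=False, encoding='utf-8', na_rep='')
# Example call: process_month('202004', '2020-04')
###Output
_____no_output_____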
###Markdown
**Auxílio Emergencial 2021**
###Code
meses = ['2021-01', '2021-02', '2021-03', '2021-04', '2021-05', '2021-06', '2021-07']
file = []
for filepath in glob.iglob(r'/content/gdrive/MyDrive/br_mc_auxilio_emergencial/input/*.csv'):
file.append(filepath)
mes=[]
for i in range(len(file)) : mes.append(int(file[i][56:62]))
dict_mes = dict(zip(mes, file))
dict_mes
for i in mes:
if i >= 202101:
df = pd.DataFrame()
df = pd.read_csv(dict_mes[i], sep=';', dtype='string', encoding='latin-1')
df.rename(columns=rename, inplace=True)
df['valor_beneficio'] = df['valor_beneficio'].apply(lambda x: str(x).replace(',','.'))
df.loc[(df['nis_beneficiario'] == '00000000000'), 'nis_beneficiario'] = ''
df.loc[(df['nis_responsavel'] == '-2'), 'nis_responsavel'] = ''
df['cpf_responsavel'] = df['cpf_responsavel'].fillna('')
df.loc[(df['nome_responsavel'] == 'Não se aplica'), 'nome_responsavel'] = ''
df.loc[(df['observacao'] == 'Não há'), 'observacao'] = ''
df['observacao'] = df['observacao'].fillna('')
df.sort_values(['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel', 'parcela'], inplace=True)
df.drop_duplicates(subset=['nis_beneficiario', 'nis_responsavel', 'cpf_beneficiario', 'cpf_responsavel', 'parcela', 'observacao'], keep ='first', inplace=True)
df = df[ordem]
for uf in ufs:
for m in meses:
exec("df_{} = df[df['sigla_uf']== uf]".format(uf))
exec("df_{}.drop(['sigla_uf', 'mes'], axis=1, inplace=True)".format(uf))
print("Particionando {} de {}".format(uf,m))
exec("df_{}.to_csv('/content/gdrive/MyDrive/br_mc_auxilio_emergencial/output/mes={}/sigla_uf={}/microdados2.csv',index=False, encoding='utf-8', na_rep='')".format(uf,m,uf))
exec('del df_{}'.format(uf))
###Output
_____no_output_____ |
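###Markdown
The `exec()` calls in the 2021 loop above are not needed: an f-string can build the output path, and a plain DataFrame slice replaces the generated variable names. A minimal sketch with the same write pattern:
###Code
# Same partitioning as above without exec(): filter by UF and format the path directly.
output_base = '/content/gdrive/MyDrive/br_mc_auxilio_emergencial/output'
for uf in ufs:
    df_uf = df[df['sigla_uf'] == uf].drop(columns=['sigla_uf', 'mes'])
    for m in meses:
        df_uf.to_csv(f'{output_base}/mes={m}/sigla_uf={uf}/microdados2.csv',
                     index=False, encoding='utf-8', na_rep='')
###Output
_____no_output_____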
Nyquist.ipynb | ###Markdown
Let's look at a simple FID. It has only one signal and there is no relaxation. In this case, our FID has a signal with an oscillation of 2.0 Hz.
###Code
import numpy as np
import matplotlib.pyplot as plt
# define a standard look for all of the plots
%matplotlib inline
font = {'size' : 20}
plt.rc('font', **font)
DW = 0.005
NP = 512
OMEGA1 = 2.00
OMEGA2 = 2.00
OMEGA3 = 1.50
OMEGA4 = 6.00
AQ = NP * DW
t = np.arange(0.0, NP * DW, DW)
i = np.cos(2 * np.pi * OMEGA2 * t)
fig, ax1 = plt.subplots(figsize=(15,7.5))
ax1.plot(t, i, color='black', linewidth=2)
ax1.set_xlim(-0.05, (AQ + 0.05))
ax1.set_xlabel('time')
ax1.set_ylabel('intensity')
ax1.set_ylim(-1.1, 1.1)
ax1.grid(True)
###Output
_____no_output_____
###Markdown
Ok. This is what we expected, right? It's a cosine-modulated signal with a frequency of 2.0 Hz. I've drawn it as a continuous function, but we know that's not right. The computer digitizes this analog signal and turns it into discrete data points: [I(t), t]. How does the computer digitize this signal? In other words, how often does it sample points?
###Code
nyquist_t = np.arange(0.0, NP * DW, (1 / (2 * OMEGA1)))
nyquist_i = np.cos(2 * np.pi * OMEGA1 * nyquist_t)
fig, ax2 = plt.subplots(figsize=(15,7.5))
ax2.plot(t, i, color='black', linewidth=2)
ax2.scatter(nyquist_t, nyquist_i, s=200, color='orange')
ax2.set_xlim(-0.05, (AQ + 0.05))
ax2.set_xlabel('time')
ax2.set_ylabel('intensity')
ax2.set_ylim(-1.1, 1.1)
ax2.grid(True)
###Output
_____no_output_____
###Markdown
According to the Nyquist theorem, we need to collect data points at a rate that is two times the fastest oscillation. Our signal oscillates at 2.0 Hz. Therefore, we need to collect four data points per second. We can also say that we need to collect two data points _per cycle_.
###Code
t = np.arange(0.0, NP * DW, DW)
i = np.cos(2 * np.pi * OMEGA3 * t)
nyquist_t3 = np.arange(0.0, NP * DW, (1 / (2 * OMEGA1)))
nyquist_i3 = np.cos(2 * np.pi * OMEGA3 * nyquist_t)
fig, ax3 = plt.subplots(figsize=(15,7.5))
ax3.plot(t, i, color='black', linewidth=2)
ax3.scatter(nyquist_t, nyquist_i, s=200, color='orange')
ax3.scatter(nyquist_t3, nyquist_i3, s=200, color='blue')
ax3.set_xlim(-0.05, (AQ + 0.05))
ax3.set_xlabel('time')
ax3.set_ylabel('intensity')
ax3.set_ylim(-1.1, 1.1)
ax3.grid(True)
###Output
_____no_output_____
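###Markdown
The relationship can be checked numerically: the spacing of the orange points sets the sampling rate, and half of that rate is the highest frequency that can be represented without ambiguity. A small sketch:
###Code
# Dwell time of the orange sample points, the resulting sampling rate,
# and the corresponding Nyquist frequency.
dwell = 1 / (2 * OMEGA1)      # 0.25 s between samples
sample_rate = 1 / dwell       # 4 samples per second
nyquist_freq = sample_rate / 2
print(f"sampling rate = {sample_rate:.1f} Hz, Nyquist frequency = {nyquist_freq:.1f} Hz")
###Output
_____no_output_____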
###Markdown
Wow. Look at that. Here is a 1.5 Hz signal. As the oscillating signal (black line) gets slower, the original sample points (orange circles) no longer match it. Resampling the new frequency at the same time intervals (blue circles) produces a sampling pattern that is unique to that particular frequency, so no _slower_ signal can be mistaken for the actual signal. If I continue to sample four data points per second, any signal that is 2.0 Hz or slower can be uniquely identified.
###Code
t = np.arange(0.0, NP * DW, DW)
i = np.cos(2 * np.pi * OMEGA4 * t)
nyquist_t4 = np.arange(0.0, NP * DW, (1 / (2 * OMEGA1)))
nyquist_i4 = np.cos(2 * np.pi * OMEGA1 * nyquist_t)
fig, ax4 = plt.subplots(figsize=(15,7.5))
ax4.plot(t, i, color='black', linewidth=2)
ax4.scatter(nyquist_t4, nyquist_i4, s=200, color='orange')
# ax4.scatter(nyquist_t3, nyquist_i3, s=200, color='blue')
ax4.set_xlim(-0.05, (AQ + 0.05))
ax4.set_xlabel('time')
ax4.set_ylabel('intensity')
ax4.set_ylim(-1.1, 1.1)
ax4.grid(True)
###Output
_____no_output_____ |
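###Markdown
The last plot shows what happens when the Nyquist condition is violated: a 6.0 Hz signal sampled four times per second passes through exactly the same points as the 2.0 Hz signal, so the two are indistinguishable after digitization. A small sketch of that folding (aliasing) relation, assuming the usual |f - n*f_s| form:
###Code
# Aliased frequency of the 6 Hz signal when sampled at 4 samples per second.
f_s = 2 * OMEGA1                                # sampling rate used for the circles (4 Hz)
alias = abs(OMEGA4 - round(OMEGA4 / f_s) * f_s)
print(f"A {OMEGA4} Hz signal sampled at {f_s} Hz appears at {alias} Hz")
###Output
_____no_output_____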
sagemaker-python-sdk/tensorflow_serving_container/tensorflow_serving_container.ipynb | ###Markdown
Using the SageMaker TensorFlow Serving ContainerThe [SageMaker TensorFlow Serving Container](https://github.com/aws/sagemaker-tensorflow-serving-container) makes it easy to deploy trained TensorFlow models to a SageMaker Endpoint without the need for any custom model loading or inference code.In this example, we will show how to deploy one or more pre-trained models from [TensorFlow Hub](https://www.tensorflow.org/hub/) to a SageMaker Endpoint using the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk), and then use the model(s) to perform inference requests. SetupFirst, we need to ensure we have an up-to-date version of the SageMaker Python SDK, and install a few additional python packages.
###Code
!pip install -U --quiet "sagemaker>=1.14.2,<2"
!pip install -U --quiet opencv-python tensorflow-hub
###Output
_____no_output_____
###Markdown
Next, we'll get the IAM execution role from our notebook environment, so that SageMaker can access resources in your AWS account later in the example.
###Code
from sagemaker import get_execution_role
sagemaker_role = get_execution_role()
###Output
_____no_output_____
###Markdown
Download and prepare a model from TensorFlow HubThe TensorFlow Serving Container works with any model stored in TensorFlow's [SavedModel format](https://www.tensorflow.org/guide/saved_model). This could be the output of your own training job or a model trained elsewhere. For this example, we will use a pre-trained version of the MobileNet V2 image classification model from [TensorFlow Hub](https://tfhub.dev/).The TensorFlow Hub models are pre-trained, but do not include a serving ``signature_def``, so we'll need to load the model into a TensorFlow session, define the input and output layers, and export it as a SavedModel. There is a helper function in this notebook's `sample_utils.py` module that will do that for us.
###Code
import sample_utils
model_name = 'mobilenet_v2_140_224'
export_path = 'mobilenet'
model_path = sample_utils.tfhub_to_savedmodel(model_name, export_path)
print('SavedModel exported to {}'.format(model_path))
###Output
_____no_output_____
###Markdown
After exporting the model, we can inspect it using TensorFlow's ``saved_model_cli`` command. In the command output, you should see ```MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:signature_def['serving_default']:...```The command output should also show details of the model inputs and outputs.
###Code
!saved_model_cli show --all --dir {model_path}
###Output
_____no_output_____
###Markdown
Optional: add a second modelThe TensorFlow Serving container can host multiple models, if they are packaged in the same model archive file. Let's prepare a second version of the MobileNet model so we can demonstrate this. The `mobilenet_v2_035_224` model is a shallower version of MobileNetV2 that trades accuracy for smaller model size and faster computation, but has the same inputs and outputs.
###Code
second_model_name = 'mobilenet_v2_035_224'
second_model_path = sample_utils.tfhub_to_savedmodel(second_model_name, export_path)
print('SavedModel exported to {}'.format(second_model_path))
###Output
_____no_output_____
###Markdown
Next we need to create a model archive file containing the exported model. Create a model archive fileSageMaker models need to be packaged in `.tar.gz` files. When your endpoint is provisioned, the files in the archive will be extracted and put in `/opt/ml/model/` on the endpoint.
###Code
!tar -C "$PWD" -czf mobilenet.tar.gz mobilenet/
###Output
_____no_output_____
###Markdown
Upload the model archive file to S3We now have a suitable model archive ready in our notebook. We need to upload it to S3 before we can create a SageMaker Model that uses it. We'll use the SageMaker Python SDK to handle the upload.
###Code
from sagemaker.session import Session
model_data = Session().upload_data(path='mobilenet.tar.gz', key_prefix='model')
print('model uploaded to: {}'.format(model_data))
###Output
_____no_output_____
###Markdown
Create a SageMaker Model and EndpointNow that the model archive is in S3, we can create a Model and deploy it to an Endpoint with a few lines of python code:
###Code
from sagemaker.tensorflow.serving import Model
# Use an env argument to set the name of the default model.
# This is optional, but recommended when you deploy multiple models
# so that requests that don't include a model name are sent to a
# predictable model.
env = {'SAGEMAKER_TFS_DEFAULT_MODEL_NAME': 'mobilenet_v2_140_224'}
model = Model(model_data=model_data, role=sagemaker_role, framework_version='1.15.2', env=env)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')
###Output
_____no_output_____
###Markdown
Make predictions using the endpointThe endpoint is now up and running, and ready to handle inference requests. The `deploy` call above returned a `predictor` object. The `predict` method of this object handles sending requests to the endpoint. It also automatically handles JSON serialization of our input arguments, and JSON deserialization of the prediction results.We'll use these sample images:
###Code
# read the image files into a tensor (numpy array)
kitten_image = sample_utils.image_file_to_tensor('kitten.jpg')
# get a prediction from the endpoint
# the image input is automatically converted to a JSON request.
# the JSON response from the endpoint is returned as a python dict
result = predictor.predict(kitten_image)
# show the raw result
print(result)
###Output
_____no_output_____
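###Markdown
For reference, the `predict` call above is roughly equivalent to posting a TensorFlow Serving style JSON body; a small sketch (the exact shape depends on the model's serving signature):
###Code
import json

# Approximate request body the predictor builds from the numpy input.
request_body = json.dumps({"instances": kitten_image.tolist()})
print(request_body[:120], "...")
# The endpoint responds with a JSON document of the form {"predictions": [...]}.
###Output
_____no_output_____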
###Markdown
Add class labels and show formatted resultsThe `sample_utils` module includes functions that can add Imagenet class labels to our results and print formatted output. Let's use them to get a better sense of how well our model worked on the input image.
###Code
# add class labels to the predicted result
sample_utils.add_imagenet_labels(result)
# show the probabilities and labels for the top predictions
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Optional: make predictions using the second modelIf you added the second model (`mobilenet_v2_035_224`) in the previous optional step, then you can also send prediction requests to that model. To do that, we'll need to create a new `predictor` object.Note: if you are using local mode (by changing the instance type to `local` or `local_gpu`), you'll need to create the new predictor this way instead:```predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224', sagemaker_session=predictor.sagemaker_session)```
###Code
from sagemaker.tensorflow.serving import Predictor
# use values from the default predictor to set up the new one
predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224')
# make a new prediction
bee_image = sample_utils.image_file_to_tensor('bee.jpg')
result = predictor2.predict(bee_image)
# show the formatted result
sample_utils.add_imagenet_labels(result)
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Additional InformationThe TensorFlow Serving Container supports additional features not covered in this notebook, including support for:- TensorFlow Serving REST API requests, including classify and regress requests- CSV input- Other JSON formatsFor information on how to use these features, refer to the documentation in the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst). Cleaning upTo avoid incurring charges to your AWS account for the resources used in this tutorial, you need to delete the SageMaker Endpoint.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Using the SageMaker TensorFlow Serving ContainerThe [SageMaker TensorFlow Serving Container](https://github.com/aws/sagemaker-tensorflow-serving-container) makes it easy to deploy trained TensorFlow models to a SageMaker Endpoint without the need for any custom model loading or inference code.In this example, we will show how deploy one or more pre-trained models from [TensorFlow Hub](https://www.tensorflow.org/hub/) to a SageMaker Endpoint using the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk), and then use the model(s) to perform inference requests. SetupFirst, we need to ensure we have an up-to-date version of the SageMaker Python SDK, and install a fewadditional python packages.
###Code
!pip install -U --quiet "sagemaker>=1.14.2,<2"
!pip install -U --quiet opencv-python tensorflow-hub
###Output
_____no_output_____
###Markdown
Next, we'll get the IAM execution role from our notebook environment, so that SageMaker can access resources in your AWS account later in the example.
###Code
from sagemaker import get_execution_role
sagemaker_role = get_execution_role()
###Output
_____no_output_____
###Markdown
Download and prepare a model from TensorFlow HubThe TensorFlow Serving Container works with any model stored in TensorFlow's [SavedModel format](https://www.tensorflow.org/guide/saved_model). This could be the output of your own training job or a model trained elsewhere. For this example, we will use a pre-trained version of the MobileNet V2 image classification model from [TensorFlow Hub](https://tfhub.dev/).The TensorFlow Hub models are pre-trained, but do not include a serving ``signature_def``, so we'll need to load the model into a TensorFlow session, define the input and output layers, and export it as a SavedModel. There is a helper function in this notebook's `sample_utils.py` module that will do that for us.
###Code
import sample_utils
model_name = "mobilenet_v2_140_224"
export_path = "mobilenet"
model_path = sample_utils.tfhub_to_savedmodel(model_name, export_path)
print("SavedModel exported to {}".format(model_path))
###Output
_____no_output_____
###Markdown
After exporting the model, we can inspect it using TensorFlow's ``saved_model_cli`` command. In the command output, you should see ```MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:signature_def['serving_default']:...```The command output should also show details of the model inputs and outputs.
###Code
!saved_model_cli show --all --dir {model_path}
###Output
_____no_output_____
###Markdown
Optional: add a second modelThe TensorFlow Serving container can host multiple models, if they are packaged in the same model archive file. Let's prepare a second version of the MobileNet model so we can demonstrate this. The `mobilenet_v2_035_224` model is a shallower version of MobileNetV2 that trades accuracy for smaller model size and faster computation, but has the same inputs and outputs.
###Code
second_model_name = "mobilenet_v2_035_224"
second_model_path = sample_utils.tfhub_to_savedmodel(second_model_name, export_path)
print("SavedModel exported to {}".format(second_model_path))
###Output
_____no_output_____
###Markdown
Next we need to create a model archive file containing the exported model. Create a model archive fileSageMaker models need to be packaged in `.tar.gz` files. When your endpoint is provisioned, the files in the archive will be extracted and put in `/opt/ml/model/` on the endpoint.
###Code
!tar -C "$PWD" -czf mobilenet.tar.gz mobilenet/
###Output
_____no_output_____
###Markdown
Upload the model archive file to S3We now have a suitable model archive ready in our notebook. We need to upload it to S3 before we can create a SageMaker Model that. We'll use the SageMaker Python SDK to handle the upload.
###Code
from sagemaker.session import Session
model_data = Session().upload_data(path="mobilenet.tar.gz", key_prefix="model")
print("model uploaded to: {}".format(model_data))
###Output
_____no_output_____
###Markdown
Create a SageMaker Model and EndpointNow that the model archive is in S3, we can create a Model and deploy it to an Endpoint with a few lines of python code:
###Code
from sagemaker.tensorflow.serving import Model
# Use an env argument to set the name of the default model.
# This is optional, but recommended when you deploy multiple models
# so that requests that don't include a model name are sent to a
# predictable model.
env = {"SAGEMAKER_TFS_DEFAULT_MODEL_NAME": "mobilenet_v2_140_224"}
model = Model(model_data=model_data, role=sagemaker_role, framework_version="1.15.2", env=env)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")
###Output
_____no_output_____
###Markdown
Make predictions using the endpointThe endpoint is now up and running, and ready to handle inference requests. The `deploy` call above returned a `predictor` object. The `predict` method of this object handles sending requests to the endpoint. It also automatically handles JSON serialization of our input arguments, and JSON deserialization of the prediction results.We'll use these sample images:
###Code
# read the image files into a tensor (numpy array)
kitten_image = sample_utils.image_file_to_tensor("kitten.jpg")
# get a prediction from the endpoint
# the image input is automatically converted to a JSON request.
# the JSON response from the endpoint is returned as a python dict
result = predictor.predict(kitten_image)
# show the raw result
print(result)
###Output
_____no_output_____
###Markdown
Add class labels and show formatted resultsThe `sample_utils` module includes functions that can add Imagenet class labels to our results and print formatted output. Let's use them to get a better sense of how well our model worked on the input image.
###Code
# add class labels to the predicted result
sample_utils.add_imagenet_labels(result)
# show the probabilities and labels for the top predictions
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Optional: make predictions using the second modelIf you added the second model (`mobilenet_v2_035_224`) in the previous optional step, then you can also send prediction requests to that model. To do that, we'll need to create a new `predictor` object.Note: if you are using local mode (by changing the instance type to `local` or `local_gpu`), you'll need to create the new predictor this way instead:```predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224', sagemaker_session=predictor.sagemaker_session)```
###Code
from sagemaker.tensorflow.serving import Predictor
# use values from the default predictor to set up the new one
predictor2 = Predictor(predictor.endpoint, model_name="mobilenet_v2_035_224")
# make a new prediction
bee_image = sample_utils.image_file_to_tensor("bee.jpg")
result = predictor2.predict(bee_image)
# show the formatted result
sample_utils.add_imagenet_labels(result)
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Additional InformationThe TensorFlow Serving Container supports additional features not covered in this notebook, including support for:- TensorFlow Serving REST API requests, including classify and regress requests- CSV input- Other JSON formatsFor information on how to use these features, refer to the documentation in the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst). Cleaning upTo avoid incurring charges to your AWS account for the resources used in this tutorial, you need to delete the SageMaker Endpoint.
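Before running the cleanup cell below, here is a hedged sketch (not part of the original notebook) of the CSV input support listed above: a predictor configured with the SDK's CSV serializer sends comma-separated rows instead of JSON. The MobileNet model deployed here expects image tensors, so the request itself is left commented out and is purely illustrative.
###Code
# Hedged sketch: configure a predictor that sends CSV input to the same endpoint.
# Assumes the endpoint created above is still running and that the SageMaker Python
# SDK v1 csv_serializer is available.
from sagemaker.predictor import csv_serializer
from sagemaker.tensorflow.serving import Predictor
csv_predictor = Predictor(
    predictor.endpoint,         # reuse the endpoint name from the default predictor
    serializer=csv_serializer,  # encodes each row of the input as one CSV line
)
# each inner list would become one CSV row / one instance in the request, e.g.:
# csv_result = csv_predictor.predict([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
###Output
_____no_output_____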
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Using the SageMaker TensorFlow Serving ContainerThe [SageMaker TensorFlow Serving Container](https://github.com/aws/sagemaker-tensorflow-serving-container) makes it easy to deploy trained TensorFlow models to a SageMaker Endpoint without the need for any custom model loading or inference code.In this example, we will show how to deploy one or more pre-trained models from [TensorFlow Hub](https://www.tensorflow.org/hub/) to a SageMaker Endpoint using the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk), and then use the model(s) to perform inference requests. SetupFirst, we need to ensure we have an up-to-date version of the SageMaker Python SDK, and install a few additional Python packages.
###Code
!pip install -U --quiet "sagemaker>=1.14.2,<2"
!pip install -U --quiet opencv-python tensorflow-hub
###Output
_____no_output_____
###Markdown
Next, we'll get the IAM execution role from our notebook environment, so that SageMaker can access resources in your AWS account later in the example.
###Code
from sagemaker import get_execution_role
sagemaker_role = get_execution_role()
###Output
_____no_output_____
###Markdown
Download and prepare a model from TensorFlow HubThe TensorFlow Serving Container works with any model stored in TensorFlow's [SavedModel format](https://www.tensorflow.org/guide/saved_model). This could be the output of your own training job or a model trained elsewhere. For this example, we will use a pre-trained version of the MobileNet V2 image classification model from [TensorFlow Hub](https://tfhub.dev/).The TensorFlow Hub models are pre-trained, but do not include a serving ``signature_def``, so we'll need to load the model into a TensorFlow session, define the input and output layers, and export it as a SavedModel. There is a helper function in this notebook's `sample_utils.py` module that will do that for us.
###Code
import sample_utils
model_name = 'mobilenet_v2_140_224'
export_path = 'mobilenet'
model_path = sample_utils.tfhub_to_savedmodel(model_name, export_path)
print('SavedModel exported to {}'.format(model_path))
###Output
_____no_output_____
###Markdown
After exporting the model, we can inspect it using TensorFlow's ``saved_model_cli`` command. In the command output, you should see ```MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:signature_def['serving_default']:...```The command output should also show details of the model inputs and outputs.
###Code
!saved_model_cli show --all --dir {model_path}
###Output
_____no_output_____
###Markdown
Optional: add a second modelThe TensorFlow Serving container can host multiple models, if they are packaged in the same model archive file. Let's prepare a second version of the MobileNet model so we can demonstrate this. The `mobilenet_v2_035_224` model is a shallower version of MobileNetV2 that trades accuracy for smaller model size and faster computation, but has the same inputs and outputs.
###Code
second_model_name = 'mobilenet_v2_035_224'
second_model_path = sample_utils.tfhub_to_savedmodel(second_model_name, export_path)
print('SavedModel exported to {}'.format(second_model_path))
###Output
_____no_output_____
###Markdown
Next we need to create a model archive file containing the exported model. Create a model archive fileSageMaker models need to be packaged in `.tar.gz` files. When your endpoint is provisioned, the files in the archive will be extracted and put in `/opt/ml/model/` on the endpoint.
###Code
!tar -C "$PWD" -czf mobilenet.tar.gz mobilenet/
###Output
_____no_output_____
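###Markdown
As a quick optional check (an addition, not in the original notebook), listing the archive contents verifies that the exported MobileNet directory, and the optional second model if you added it, were packed into the same archive:
###Code
# list the files inside the model archive
!tar -tzf mobilenet.tar.gz
###Output
_____no_output_____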
###Markdown
Upload the model archive file to S3We now have a suitable model archive ready in our notebook. We need to upload it to S3 before we can create a SageMaker Model that uses it. We'll use the SageMaker Python SDK to handle the upload.
###Code
from sagemaker.session import Session
model_data = Session().upload_data(path='mobilenet.tar.gz', key_prefix='model')
print('model uploaded to: {}'.format(model_data))
###Output
_____no_output_____
###Markdown
Create a SageMaker Model and EndpointNow that the model archive is in S3, we can create a Model and deploy it to an Endpoint with a few lines of python code:
###Code
from sagemaker.tensorflow.serving import Model
# Use an env argument to set the name of the default model.
# This is optional, but recommended when you deploy multiple models
# so that requests that don't include a model name are sent to a
# predictable model.
env = {'SAGEMAKER_TFS_DEFAULT_MODEL_NAME': 'mobilenet_v2_140_224'}
model = Model(model_data=model_data, role=sagemaker_role, framework_version='1.15.2', env=env)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')
###Output
_____no_output_____
###Markdown
Make predictions using the endpointThe endpoint is now up and running, and ready to handle inference requests. The `deploy` call above returned a `predictor` object. The `predict` method of this object handles sending requests to the endpoint. It also automatically handles JSON serialization of our input arguments, and JSON deserialization of the prediction results.We'll use these sample images:
###Code
# read the image files into a tensor (numpy array)
kitten_image = sample_utils.image_file_to_tensor('kitten.jpg')
# get a prediction from the endpoint
# the image input is automatically converted to a JSON request.
# the JSON response from the endpoint is returned as a python dict
result = predictor.predict(kitten_image)
# show the raw result
print(result)
###Output
_____no_output_____
###Markdown
Add class labels and show formatted resultsThe `sample_utils` module includes functions that can add Imagenet class labels to our results and print formatted output. Let's use them to get a better sense of how well our model worked on the input image.
###Code
# add class labels to the predicted result
sample_utils.add_imagenet_labels(result)
# show the probabilities and labels for the top predictions
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Optional: make predictions using the second modelIf you added the second model (`mobilenet_v2_035_224`) in the previous optional step, then you can also send prediction requests to that model. To do that, we'll need to create a new `predictor` object.Note: if you are using local mode (by changing the instance type to `local` or `local_gpu`), you'll need to create the new predictor this way instead:```predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224', sagemaker_session=predictor.sagemaker_session)```
###Code
from sagemaker.tensorflow.serving import Predictor
# use values from the default predictor to set up the new one
predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224')
# make a new prediction
bee_image = sample_utils.image_file_to_tensor('bee.jpg')
result = predictor2.predict(bee_image)
# show the formatted result
sample_utils.add_imagenet_labels(result)
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Additional InformationThe TensorFlow Serving Container supports additional features not covered in this notebook, including support for:- TensorFlow Serving REST API requests, including classify and regress requests- CSV input- Other JSON formatsFor information on how to use these features, refer to the documentation in the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst). Cleaning upTo avoid incurring charges to your AWS account for the resources used in this tutorial, you need to delete the SageMaker Endpoint.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Using the SageMaker TensorFlow Serving ContainerThe [SageMaker TensorFlow Serving Container](https://github.com/aws/sagemaker-tensorflow-serving-container) makes it easy to deploy trained TensorFlow models to a SageMaker Endpoint without the need for any custom model loading or inference code.In this example, we will show how to deploy one or more pre-trained models from [TensorFlow Hub](https://www.tensorflow.org/hub/) to a SageMaker Endpoint using the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk), and then use the model(s) to perform inference requests. Next, we'll get the IAM execution role from our notebook environment, so that SageMaker can access resources in your AWS account later in the example.
###Code
from sagemaker import get_execution_role
sagemaker_role = get_execution_role()
###Output
_____no_output_____
###Markdown
Download and prepare a model from TensorFlow HubThe TensorFlow Serving Container works with any model stored in TensorFlow's [SavedModel format](https://www.tensorflow.org/guide/saved_model). This could be the output of your own training job or a model trained elsewhere. For this example, we will use a pre-trained version of the MobileNet V2 image classification model from [TensorFlow Hub](https://tfhub.dev/).The TensorFlow Hub models are pre-trained, but do not include a serving ``signature_def``, so we'll need to load the model into a TensorFlow session, define the input and output layers, and export it as a SavedModel. There is a helper function in this notebook's `sample_utils.py` module that will do that for us.
###Code
import sample_utils
model_name = "mobilenet_v2_140_224"
export_path = "mobilenet"
model_path = sample_utils.tfhub_to_savedmodel(model_name, export_path)
print("SavedModel exported to {}".format(model_path))
###Output
_____no_output_____
###Markdown
After exporting the model, we can inspect it using TensorFlow's ``saved_model_cli`` command. In the command output, you should see ```MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:signature_def['serving_default']:...```The command output should also show details of the model inputs and outputs.
###Code
!saved_model_cli show --all --dir {model_path}
###Output
_____no_output_____
###Markdown
Optional: add a second modelThe TensorFlow Serving container can host multiple models, if they are packaged in the same model archive file. Let's prepare a second version of the MobileNet model so we can demonstrate this. The `mobilenet_v2_035_224` model is a shallower version of MobileNetV2 that trades accuracy for smaller model size and faster computation, but has the same inputs and outputs.
###Code
second_model_name = "mobilenet_v2_035_224"
second_model_path = sample_utils.tfhub_to_savedmodel(second_model_name, export_path)
print("SavedModel exported to {}".format(second_model_path))
###Output
_____no_output_____
###Markdown
Next we need to create a model archive file containing the exported model. Create a model archive fileSageMaker models need to be packaged in `.tar.gz` files. When your endpoint is provisioned, the files in the archive will be extracted and put in `/opt/ml/model/` on the endpoint.
###Code
!tar -C "$PWD" -czf mobilenet.tar.gz mobilenet/
###Output
_____no_output_____
###Markdown
Upload the model archive file to S3We now have a suitable model archive ready in our notebook. We need to upload it to S3 before we can create a SageMaker Model that uses it. We'll use the SageMaker Python SDK to handle the upload.
###Code
from sagemaker.session import Session
model_data = Session().upload_data(path="mobilenet.tar.gz", key_prefix="model")
print("model uploaded to: {}".format(model_data))
###Output
_____no_output_____
###Markdown
Create a SageMaker Model and EndpointNow that the model archive is in S3, we can create a Model and deploy it to an Endpoint with a few lines of python code:
###Code
from sagemaker.tensorflow.model import TensorFlowModel
# Use an env argument to set the name of the default model.
# This is optional, but recommended when you deploy multiple models
# so that requests that don't include a model name are sent to a
# predictable model.
env = {"SAGEMAKER_TFS_DEFAULT_MODEL_NAME": "mobilenet_v2_140_224"}
model = TensorFlowModel(model_data=model_data, role=sagemaker_role, framework_version="1.15.2", env=env)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")
###Output
_____no_output_____
###Markdown
Make predictions using the endpointThe endpoint is now up and running, and ready to handle inference requests. The `deploy` call above returned a `predictor` object. The `predict` method of this object handles sending requests to the endpoint. It also automatically handles JSON serialization of our input arguments, and JSON deserialization of the prediction results.We'll use these sample images:
###Code
# read the image files into a tensor (numpy array)
kitten_image = sample_utils.image_file_to_tensor("kitten.jpg")
# get a prediction from the endpoint
# the image input is automatically converted to a JSON request.
# the JSON response from the endpoint is returned as a python dict
result = predictor.predict(kitten_image)
# show the raw result
print(result)
###Output
_____no_output_____
###Markdown
Add class labels and show formatted resultsThe `sample_utils` module includes functions that can add Imagenet class labels to our results and print formatted output. Let's use them to get a better sense of how well our model worked on the input image.
###Code
# add class labels to the predicted result
sample_utils.add_imagenet_labels(result)
# show the probabilities and labels for the top predictions
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Optional: make predictions using the second modelIf you added the second model (`mobilenet_v2_035_224`) in the previous optional step, then you can also send prediction requests to that model. To do that, we'll need to create a new `predictor` object.Note: if you are using local mode (by changing the instance type to `local` or `local_gpu`), you'll need to create the new predictor this way instead:```predictor2 = TensorFlowPredictor(predictor.endpoint_name, model_name='mobilenet_v2_035_224', sagemaker_session=predictor.sagemaker_session)```
###Code
from sagemaker.tensorflow.model import TensorFlowPredictor
# use values from the default predictor to set up the new one
predictor2 = TensorFlowPredictor(predictor.endpoint_name, model_name="mobilenet_v2_035_224")
# make a new prediction
bee_image = sample_utils.image_file_to_tensor("bee.jpg")
result = predictor2.predict(bee_image)
# show the formatted result
sample_utils.add_imagenet_labels(result)
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Additional InformationThe TensorFlow Serving Container supports additional features not covered in this notebook, including support for:- TensorFlow Serving REST API requests, including classify and regress requests- CSV input- Other JSON formatsFor information on how to use these features, refer to the documentation in the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst). Cleaning upTo avoid incurring charges to your AWS account for the resources used in this tutorial, you need to delete the SageMaker Endpoint.
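Before running the cleanup cell below, here is a hedged sketch (not part of the original notebook) of one of the "other JSON formats" mentioned above: the TensorFlow Serving REST API request format, where the input is wrapped in an "instances" field. The default JSON serialization sends the dict as-is, so the endpoint should treat it like the earlier request.
###Code
# Hedged sketch: send a request in the TensorFlow Serving REST API "instances" format.
# Assumes the endpoint deployed above is still running and that kitten_image was
# loaded earlier in this notebook.
request = {"instances": kitten_image.tolist()}
result = predictor.predict(request)
print(result)
###Output
_____no_output_____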
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Using the SageMaker TensorFlow Serving ContainerThe [SageMaker TensorFlow Serving Container](https://github.com/aws/sagemaker-tensorflow-serving-container) makes it easy to deploy trained TensorFlow models to a SageMaker Endpoint without the need for any custom model loading or inference code.In this example, we will show how to deploy one or more pre-trained models from [TensorFlow Hub](https://www.tensorflow.org/hub/) to a SageMaker Endpoint using the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk), and then use the model(s) to perform inference requests. SetupFirst, we need to ensure we have an up-to-date version of the SageMaker Python SDK, and install a few additional Python packages.
###Code
!pip install -U --quiet "sagemaker>=1.14.2"
!pip install -U --quiet opencv-python tensorflow-hub
###Output
_____no_output_____
###Markdown
Next, we'll get the IAM execution role from our notebook environment, so that SageMaker can access resources in your AWS account later in the example.
###Code
from sagemaker import get_execution_role
sagemaker_role = get_execution_role()
###Output
_____no_output_____
###Markdown
Download and prepare a model from TensorFlow HubThe TensorFlow Serving Container works with any model stored in TensorFlow's [SavedModel format](https://www.tensorflow.org/guide/saved_model). This could be the output of your own training job or a model trained elsewhere. For this example, we will use a pre-trained version of the MobileNet V2 image classification model from [TensorFlow Hub](https://tfhub.dev/).The TensorFlow Hub models are pre-trained, but do not include a serving ``signature_def``, so we'll need to load the model into a TensorFlow session, define the input and output layers, and export it as a SavedModel. There is a helper function in this notebook's `sample_utils.py` module that will do that for us.
###Code
import sample_utils
model_name = 'mobilenet_v2_140_224'
export_path = 'mobilenet'
model_path = sample_utils.tfhub_to_savedmodel(model_name, export_path)
print('SavedModel exported to {}'.format(model_path))
###Output
_____no_output_____
###Markdown
After exporting the model, we can inspect it using TensorFlow's ``saved_model_cli`` command. In the command output, you should see ```MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:signature_def['serving_default']:...```The command output should also show details of the model inputs and outputs.
###Code
!saved_model_cli show --all --dir {model_path}
###Output
_____no_output_____
###Markdown
Optional: add a second modelThe TensorFlow Serving container can host multiple models, if they are packaged in the same model archive file. Let's prepare a second version of the MobileNet model so we can demonstrate this. The `mobilenet_v2_035_224` model is a shallower version of MobileNetV2 that trades accuracy for smaller model size and faster computation, but has the same inputs and outputs.
###Code
second_model_name = 'mobilenet_v2_035_224'
second_model_path = sample_utils.tfhub_to_savedmodel(second_model_name, export_path)
print('SavedModel exported to {}'.format(second_model_path))
###Output
_____no_output_____
###Markdown
Next we need to create a model archive file containing the exported model. Create a model archive fileSageMaker models need to be packaged in `.tar.gz` files. When your endpoint is provisioned, the files in the archive will be extracted and put in `/opt/ml/model/` on the endpoint.
###Code
!tar -C "$PWD" -czf mobilenet.tar.gz mobilenet/
###Output
_____no_output_____
###Markdown
Upload the model archive file to S3We now have a suitable model archive ready in our notebook. We need to upload it to S3 before we can create a SageMaker Model that uses it. We'll use the SageMaker Python SDK to handle the upload.
###Code
from sagemaker.session import Session
model_data = Session().upload_data(path='mobilenet.tar.gz', key_prefix='model')
print('model uploaded to: {}'.format(model_data))
###Output
_____no_output_____
###Markdown
Create a SageMaker Model and EndpointNow that the model archive is in S3, we can create a Model and deploy it to an Endpoint with a few lines of python code:
###Code
from sagemaker.tensorflow.serving import Model
# Use an env argument to set the name of the default model.
# This is optional, but recommended when you deploy multiple models
# so that requests that don't include a model name are sent to a
# predictable model.
env = {'SAGEMAKER_TFS_DEFAULT_MODEL_NAME': 'mobilenet_v2_140_224'}
model = Model(model_data=model_data, role=sagemaker_role, framework_version='1.13', env=env)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')
###Output
_____no_output_____
###Markdown
Make predictions using the endpointThe endpoint is now up and running, and ready to handle inference requests. The `deploy` call above returned a `predictor` object. The `predict` method of this object handles sending requests to the endpoint. It also automatically handles JSON serialization of our input arguments, and JSON deserialization of the prediction results.We'll use these sample images:
###Code
# read the image files into a tensor (numpy array)
kitten_image = sample_utils.image_file_to_tensor('kitten.jpg')
# get a prediction from the endpoint
# the image input is automatically converted to a JSON request.
# the JSON response from the endpoint is returned as a python dict
result = predictor.predict(kitten_image)
# show the raw result
print(result)
###Output
_____no_output_____
###Markdown
Add class labels and show formatted resultsThe `sample_utils` module includes functions that can add Imagenet class labels to our results and print formatted output. Let's use them to get a better sense of how well our model worked on the input image.
###Code
# add class labels to the predicted result
sample_utils.add_imagenet_labels(result)
# show the probabilities and labels for the top predictions
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Optional: make predictions using the second modelIf you added the second model (`mobilenet_v2_035_224`) in the previous optional step, then you can also send prediction requests to that model. To do that, we'll need to create a new `predictor` object.Note: if you are using local mode (by changing the instance type to `local` or `local_gpu`), you'll need to create the new predictor this way instead:```predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224', sagemaker_session=predictor.sagemaker_session)```
###Code
from sagemaker.tensorflow.serving import Predictor
# use values from the default predictor to set up the new one
predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224')
# make a new prediction
bee_image = sample_utils.image_file_to_tensor('bee.jpg')
result = predictor2.predict(bee_image)
# show the formatted result
sample_utils.add_imagenet_labels(result)
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
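###Markdown
The note above mentions local mode. As a hedged sketch (not part of the original notebook), the same model archive can be served on this notebook instance by deploying with the `local` instance type; this assumes Docker is available locally and creates a fresh `Model` object so the SDK can set up a local session:
###Code
# Hedged sketch: run the TensorFlow Serving container locally instead of on a
# hosted endpoint ('local_gpu' would be used on a GPU instance with nvidia-docker).
from sagemaker.tensorflow.serving import Model
local_model = Model(model_data=model_data, role=sagemaker_role,
                    framework_version='1.13', env=env)
local_predictor = local_model.deploy(initial_instance_count=1, instance_type='local')
# requests work the same way as against a hosted endpoint
print(local_predictor.predict(kitten_image))
# stop the local serving container when finished
local_predictor.delete_endpoint()
###Output
_____no_output_____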
###Markdown
Additional InformationThe TensorFlow Serving Container supports additional features not covered in this notebook, including support for:- TensorFlow Serving REST API requests, including classify and regress requests- CSV input- Other JSON formatsFor information on how to use these features, refer to the documentation in the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst). Cleaning upTo avoid incurring charges to your AWS account for the resources used in this tutorial, you need to delete the SageMaker Endpoint.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Using the SageMaker TensorFlow Serving ContainerThe [SageMaker TensorFlow Serving Container](https://github.com/aws/sagemaker-tensorflow-serving-container) makes it easy to deploy trained TensorFlow models to a SageMaker Endpoint without the need for any custom model loading or inference code.In this example, we will show how to deploy one or more pre-trained models from [TensorFlow Hub](https://www.tensorflow.org/hub/) to a SageMaker Endpoint using the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk), and then use the model(s) to perform inference requests. SetupFirst, we need to ensure we have an up-to-date version of the SageMaker Python SDK, and install a few additional Python packages.
###Code
!pip install -U --quiet "sagemaker>=1.14.2"
!pip install -U --quiet opencv-python tensorflow-hub
###Output
_____no_output_____
###Markdown
Next, we'll get the IAM execution role from our notebook environment, so that SageMaker can access resources in your AWS account later in the example.
###Code
from sagemaker import get_execution_role
sagemaker_role = get_execution_role()
###Output
_____no_output_____
###Markdown
Download and prepare a model from TensorFlow HubThe TensorFlow Serving Container works with any model stored in TensorFlow's [SavedModel format](https://www.tensorflow.org/guide/saved_model). This could be the output of your own training job or a model trained elsewhere. For this example, we will use a pre-trained version of the MobileNet V2 image classification model from [TensorFlow Hub](https://tfhub.dev/).The TensorFlow Hub models are pre-trained, but do not include a serving ``signature_def``, so we'll need to load the model into a TensorFlow session, define the input and output layers, and export it as a SavedModel. There is a helper function in this notebook's `sample_utils.py` module that will do that for us.
###Code
import sample_utils
model_name = 'mobilenet_v2_140_224'
export_path = 'mobilenet'
model_path = sample_utils.tfhub_to_savedmodel(model_name, export_path)
print('SavedModel exported to {}'.format(model_path))
###Output
_____no_output_____
###Markdown
After exporting the model, we can inspect it using TensorFlow's ``saved_model_cli`` command. In the command output, you should see ```MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:signature_def['serving_default']:...```The command output should also show details of the model inputs and outputs.
###Code
!saved_model_cli show --all --dir {model_path}
###Output
_____no_output_____
###Markdown
Optional: add a second modelThe TensorFlow Serving container can host multiple models, if they are packaged in the same model archive file. Let's prepare a second version of the MobileNet model so we can demonstrate this. The `mobilenet_v2_035_224` model is a shallower version of MobileNetV2 that trades accuracy for smaller model size and faster computation, but has the same inputs and outputs.
###Code
second_model_name = 'mobilenet_v2_035_224'
second_model_path = sample_utils.tfhub_to_savedmodel(second_model_name, export_path)
print('SavedModel exported to {}'.format(second_model_path))
###Output
_____no_output_____
###Markdown
Next we need to create a model archive file containing the exported model. Create a model archive fileSageMaker models need to be packaged in `.tar.gz` files. When your endpoint is provisioned, the files in the archive will be extracted and put in `/opt/ml/model/` on the endpoint.
###Code
!tar -C "$PWD" -czf mobilenet.tar.gz mobilenet/
###Output
_____no_output_____
###Markdown
Upload the model archive file to S3We now have a suitable model archive ready in our notebook. We need to upload it to S3 before we can create a SageMaker Model that uses it. We'll use the SageMaker Python SDK to handle the upload.
###Code
from sagemaker.session import Session
model_data = Session().upload_data(path='mobilenet.tar.gz', key_prefix='model')
print('model uploaded to: {}'.format(model_data))
###Output
_____no_output_____
###Markdown
Create a SageMaker Model and EndpointNow that the model archive is in S3, we can create a Model and deploy it to an Endpoint with a few lines of python code:
###Code
from sagemaker.tensorflow.serving import Model
# Use an env argument to set the name of the default model.
# This is optional, but recommended when you deploy multiple models
# so that requests that don't include a model name are sent to a
# predictable model.
env = {'SAGEMAKER_TFS_DEFAULT_MODEL_NAME': 'mobilenet_v2_140_224'}
model = Model(model_data=model_data, role=sagemaker_role, framework_version='1.11', env=env)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')
###Output
_____no_output_____
###Markdown
Make predictions using the endpointThe endpoint is now up and running, and ready to handle inference requests. The `deploy` call above returned a `predictor` object. The `predict` method of this object handles sending requests to the endpoint. It also automatically handles JSON serialization of our input arguments, and JSON deserialization of the prediction results.We'll use these sample images:
###Code
# read the image files into a tensor (numpy array)
kitten_image = sample_utils.image_file_to_tensor('kitten.jpg')
# get a prediction from the endpoint
# the image input is automatically converted to a JSON request.
# the JSON response from the endpoint is returned as a python dict
result = predictor.predict(kitten_image)
# show the raw result
print(result)
###Output
_____no_output_____
###Markdown
Add class labels and show formatted resultsThe `sample_utils` module includes functions that can add Imagenet class labels to our results and print formatted output. Let's use them to get a better sense of how well our model worked on the input image.
###Code
# add class labels to the predicted result
sample_utils.add_imagenet_labels(result)
# show the probabilities and labels for the top predictions
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Optional: make predictions using the second modelIf you added the second model (`mobilenet_v2_035_224`) in the previous optional step, then you can also send prediction requests to that model. To do that, we'll need to create a new `predictor` object.Note: if you are using local mode (by changing the instance type to `local` or `local_gpu`), you'll need to create the new predictor this way instead:```predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224', sagemaker_session=predictor.sagemaker_session)```
###Code
from sagemaker.tensorflow.serving import Predictor
# use values from the default predictor to set up the new one
predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224')
# make a new prediction
bee_image = sample_utils.image_file_to_tensor('bee.jpg')
result = predictor2.predict(bee_image)
# show the formatted result
sample_utils.add_imagenet_labels(result)
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Additional InformationThe TensorFlow Serving Container supports additional features not covered in this notebook, including support for:- TensorFlow Serving REST API requests, including classify and regress requests- CSV input- Other JSON formatsFor information on how to use these features, refer to the documentation in the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst). Cleaning upTo avoid incurring charges to your AWS account for the resources used in this tutorial, you need to delete the SageMaker Endpoint.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Using the SageMaker TensorFlow Serving ContainerThe [SageMaker TensorFlow Serving Container](https://github.com/aws/sagemaker-tensorflow-serving-container) makes it easy to deploy trained TensorFlow models to a SageMaker Endpoint without the need for any custom model loading or inference code.In this example, we will show how to deploy one or more pre-trained models from [TensorFlow Hub](https://www.tensorflow.org/hub/) to a SageMaker Endpoint using the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk), and then use the model(s) to perform inference requests. SetupFirst, we need to ensure we have an up-to-date version of the SageMaker Python SDK, and install a few additional Python packages.
###Code
!pip install -U --quiet "sagemaker>=1.14.2"
!pip install -U --quiet opencv-python tensorflow-hub
###Output
_____no_output_____
###Markdown
Next, we'll get the IAM execution role from our notebook environment, so that SageMaker can access resources in your AWS account later in the example.
###Code
from sagemaker import get_execution_role
sagemaker_role = get_execution_role()
###Output
_____no_output_____
###Markdown
Download and prepare a model from TensorFlow HubThe TensorFlow Serving Container works with any model stored in TensorFlow's [SavedModel format](https://www.tensorflow.org/guide/saved_model). This could be the output of your own training job or a model trained elsewhere. For this example, we will use a pre-trained version of the MobileNet V2 image classification model from [TensorFlow Hub](https://tfhub.dev/).The TensorFlow Hub models are pre-trained, but do not include a serving ``signature_def``, so we'll need to load the model into a TensorFlow session, define the input and output layers, and export it as a SavedModel. There is a helper function in this notebook's `sample_utils.py` module that will do that for us.
###Code
import sample_utils
model_name = 'mobilenet_v2_140_224'
export_path = 'mobilenet'
model_path = sample_utils.tfhub_to_savedmodel(model_name, export_path)
print('SavedModel exported to {}'.format(model_path))
###Output
_____no_output_____
###Markdown
After exporting the model, we can inspect it using TensorFlow's ``saved_model_cli`` command. In the command output, you should see ```MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:signature_def['serving_default']:...```The command output should also show details of the model inputs and outputs.
###Code
!saved_model_cli show --all --dir {model_path}
###Output
_____no_output_____
###Markdown
Optional: add a second modelThe TensorFlow Serving container can host multiple models, if they are packaged in the same model archive file. Let's prepare a second version of the MobileNet model so we can demonstrate this. The `mobilenet_v2_035_224` model is a shallower version of MobileNetV2 that trades accuracy for smaller model size and faster computation, but has the same inputs and outputs.
###Code
second_model_name = 'mobilenet_v2_035_224'
second_model_path = sample_utils.tfhub_to_savedmodel(second_model_name, export_path)
print('SavedModel exported to {}'.format(second_model_path))
###Output
_____no_output_____
###Markdown
Next we need to create a model archive file containing the exported model. Create a model archive fileSageMaker models need to be packaged in `.tar.gz` files. When your endpoint is provisioned, the files in the archive will be extracted and put in `/opt/ml/model/` on the endpoint.
###Code
!tar -C "$PWD" -czf mobilenet.tar.gz mobilenet/
###Output
_____no_output_____
###Markdown
Upload the model archive file to S3We now have a suitable model archive ready in our notebook. We need to upload it to S3 before we can create a SageMaker Model that uses it. We'll use the SageMaker Python SDK to handle the upload.
###Code
from sagemaker.session import Session
model_data = Session().upload_data(path='mobilenet.tar.gz', key_prefix='model')
print('model uploaded to: {}'.format(model_data))
###Output
_____no_output_____
###Markdown
Create a SageMaker Model and EndpointNow that the model archive is in S3, we can create a Model and deploy it to an Endpoint with a few lines of python code:
###Code
from sagemaker.tensorflow.serving import Model
# Use an env argument to set the name of the default model.
# This is optional, but recommended when you deploy multiple models
# so that requests that don't include a model name are sent to a
# predictable model.
env = {'SAGEMAKER_TFS_DEFAULT_MODEL_NAME': 'mobilenet_v2_140_224'}
model = Model(model_data=model_data, role=sagemaker_role, framework_version='1.15.2', env=env)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')
###Output
_____no_output_____
###Markdown
Make predictions using the endpointThe endpoint is now up and running, and ready to handle inference requests. The `deploy` call above returned a `predictor` object. The `predict` method of this object handles sending requests to the endpoint. It also automatically handles JSON serialization of our input arguments, and JSON deserialization of the prediction results.We'll use these sample images:
###Code
# read the image files into a tensor (numpy array)
kitten_image = sample_utils.image_file_to_tensor('kitten.jpg')
# get a prediction from the endpoint
# the image input is automatically converted to a JSON request.
# the JSON response from the endpoint is returned as a python dict
result = predictor.predict(kitten_image)
# show the raw result
print(result)
###Output
_____no_output_____
###Markdown
Add class labels and show formatted resultsThe `sample_utils` module includes functions that can add Imagenet class labels to our results and print formatted output. Let's use them to get a better sense of how well our model worked on the input image.
###Code
# add class labels to the predicted result
sample_utils.add_imagenet_labels(result)
# show the probabilities and labels for the top predictions
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Optional: make predictions using the second modelIf you added the second model (`mobilenet_v2_035_224`) in the previous optional step, then you can also send prediction requests to that model. To do that, we'll need to create a new `predictor` object.Note: if you are using local mode (by changing the instance type to `local` or `local_gpu`), you'll need to create the new predictor this way instead:```predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224', sagemaker_session=predictor.sagemaker_session)```
###Code
from sagemaker.tensorflow.serving import Predictor
# use values from the default predictor to set up the new one
predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224')
# make a new prediction
bee_image = sample_utils.image_file_to_tensor('bee.jpg')
result = predictor2.predict(bee_image)
# show the formatted result
sample_utils.add_imagenet_labels(result)
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Additional InformationThe TensorFlow Serving Container supports additional features not covered in this notebook, including support for:- TensorFlow Serving REST API requests, including classify and regress requests- CSV input- Other JSON formatsFor information on how to use these features, refer to the documentation in the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst). Cleaning upTo avoid incurring charges to your AWS account for the resources used in this tutorial, you need to delete the SageMaker Endpoint.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Using the SageMaker TensorFlow Serving ContainerThe [SageMaker TensorFlow Serving Container](https://github.com/aws/sagemaker-tensorflow-serving-container) makes it easy to deploy trained TensorFlow models to a SageMaker Endpoint without the need for any custom model loading or inference code.In this example, we will show how to deploy one or more pre-trained models from [TensorFlow Hub](https://www.tensorflow.org/hub/) to a SageMaker Endpoint using the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk), and then use the model(s) to perform inference requests. SetupFirst, we need to ensure we have an up-to-date version of the SageMaker Python SDK, and install a few additional Python packages.
###Code
!pip install -U --quiet "sagemaker>=1.14.2"
!pip install -U --quiet opencv-python tensorflow-hub
###Output
_____no_output_____
###Markdown
Next, we'll get the IAM execution role from our notebook environment, so that SageMaker can access resources in your AWS account later in the example.
###Code
from sagemaker import get_execution_role
sagemaker_role = get_execution_role()
###Output
_____no_output_____
###Markdown
Download and prepare a model from TensorFlow HubThe TensorFlow Serving Container works with any model stored in TensorFlow's [SavedModel format](https://www.tensorflow.org/guide/saved_model). This could be the output of your own training job or a model trained elsewhere. For this example, we will use a pre-trained version of the MobileNet V2 image classification model from [TensorFlow Hub](https://tfhub.dev/).The TensorFlow Hub models are pre-trained, but do not include a serving ``signature_def``, so we'll need to load the model into a TensorFlow session, define the input and output layers, and export it as a SavedModel. There is a helper function in this notebook's `sample_utils.py` module that will do that for us.
###Code
import sample_utils
model_name = 'mobilenet_v2_140_224'
export_path = 'mobilenet'
model_path = sample_utils.tfhub_to_savedmodel(model_name, export_path)
print('SavedModel exported to {}'.format(model_path))
###Output
_____no_output_____
###Markdown
After exporting the model, we can inspect it using TensorFlow's ``saved_model_cli`` command. In the command output, you should see ```MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:signature_def['serving_default']:...```The command output should also show details of the model inputs and outputs.
###Code
!saved_model_cli show --all --dir {model_path}
###Output
_____no_output_____
###Markdown
Optional: add a second modelThe TensorFlow Serving container can host multiple models, if they are packaged in the same model archive file. Let's prepare a second version of the MobileNet model so we can demonstrate this. The `mobilenet_v2_035_224` model is a shallower version of MobileNetV2 that trades accuracy for smaller model size and faster computation, but has the same inputs and outputs.
###Code
second_model_name = 'mobilenet_v2_035_224'
second_model_path = sample_utils.tfhub_to_savedmodel(second_model_name, export_path)
print('SavedModel exported to {}'.format(second_model_path))
###Output
_____no_output_____
###Markdown
Next we need to create a model archive file containing the exported model. Create a model archive fileSageMaker models need to be packaged in `.tar.gz` files. When your endpoint is provisioned, the files in the archive will be extracted and put in `/opt/ml/model/` on the endpoint.
###Code
!tar -C "$PWD" -czf mobilenet.tar.gz mobilenet/
###Output
_____no_output_____
###Markdown
Upload the model archive file to S3We now have a suitable model archive ready in our notebook. We need to upload it to S3 before we can create a SageMaker Model that uses it. We'll use the SageMaker Python SDK to handle the upload.
###Code
from sagemaker.session import Session
model_data = Session().upload_data(path='mobilenet.tar.gz', key_prefix='model')
print('model uploaded to: {}'.format(model_data))
###Output
_____no_output_____
###Markdown
Create a SageMaker Model and EndpointNow that the model archive is in S3, we can create a Model and deploy it to an Endpoint with a few lines of python code:
###Code
from sagemaker.tensorflow.serving import Model
# Use an env argument to set the name of the default model.
# This is optional, but recommended when you deploy multiple models
# so that requests that don't include a model name are sent to a
# predictable model.
env = {'SAGEMAKER_TFS_DEFAULT_MODEL_NAME': 'mobilenet_v2_140_224'}
model = Model(model_data=model_data, role=sagemaker_role, framework_version='1.14', env=env)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')
###Output
_____no_output_____
###Markdown
Make predictions using the endpointThe endpoint is now up and running, and ready to handle inference requests. The `deploy` call above returned a `predictor` object. The `predict` method of this object handles sending requests to the endpoint. It also automatically handles JSON serialization of our input arguments, and JSON deserialization of the prediction results.We'll use these sample images:
###Code
# read the image files into a tensor (numpy array)
kitten_image = sample_utils.image_file_to_tensor('kitten.jpg')
# get a prediction from the endpoint
# the image input is automatically converted to a JSON request.
# the JSON response from the endpoint is returned as a python dict
result = predictor.predict(kitten_image)
# show the raw result
print(result)
###Output
_____no_output_____
###Markdown
Add class labels and show formatted resultsThe `sample_utils` module includes functions that can add Imagenet class labels to our results and print formatted output. Let's use them to get a better sense of how well our model worked on the input image.
###Code
# add class labels to the predicted result
sample_utils.add_imagenet_labels(result)
# show the probabilities and labels for the top predictions
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Optional: make predictions using the second modelIf you added the second model (`mobilenet_v2_035_224`) in the previous optional step, then you can also send prediction requests to that model. To do that, we'll need to create a new `predictor` object.Note: if you are using local mode (by changing the instance type to `local` or `local_gpu`), you'll need to create the new predictor this way instead:```predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224', sagemaker_session=predictor.sagemaker_session)```
###Code
from sagemaker.tensorflow.serving import Predictor
# use values from the default predictor to set up the new one
predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224')
# make a new prediction
bee_image = sample_utils.image_file_to_tensor('bee.jpg')
result = predictor2.predict(bee_image)
# show the formatted result
sample_utils.add_imagenet_labels(result)
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Additional InformationThe TensorFlow Serving Container supports additional features not covered in this notebook, including support for:- TensorFlow Serving REST API requests, including classify and regress requests- CSV input- Other JSON formatsFor information on how to use these features, refer to the documentation in the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst). Cleaning upTo avoid incurring charges to your AWS account for the resources used in this tutorial, you need to delete the SageMaker Endpoint.
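Before running the cleanup cell below, here is a hedged sketch (not part of the original notebook) of what the predictor does under the hood: the same endpoint can also be invoked with the low-level boto3 SageMaker runtime client, which is useful from applications that do not use the SageMaker Python SDK.
###Code
# Hedged sketch: call the endpoint directly through the SageMaker runtime API.
# Assumes the endpoint deployed above is still running and kitten_image is loaded.
import json
import boto3
runtime = boto3.client('sagemaker-runtime')
response = runtime.invoke_endpoint(
    EndpointName=predictor.endpoint,         # endpoint name from the predictor above
    ContentType='application/json',          # the TFS container accepts JSON requests
    Body=json.dumps(kitten_image.tolist()),  # same image data as the earlier request
)
print(json.loads(response['Body'].read()))
###Output
_____no_output_____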
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Using the SageMaker TensorFlow Serving ContainerThe [SageMaker TensorFlow Serving Container](https://github.com/aws/sagemaker-tensorflow-serving-container) makes it easy to deploy trained TensorFlow models to a SageMaker Endpoint without the need for any custom model loading or inference code.In this example, we will show how to deploy one or more pre-trained models from [TensorFlow Hub](https://www.tensorflow.org/hub/) to a SageMaker Endpoint using the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk), and then use the model(s) to perform inference requests. SetupFirst, we need to ensure we have an up-to-date version of the SageMaker Python SDK, and install a few additional Python packages.
###Code
!pip install -U --quiet "sagemaker>=1.14.2"
!pip install -U --quiet opencv-python tensorflow-hub
###Output
_____no_output_____
###Markdown
Next, we'll get the IAM execution role from our notebook environment, so that SageMaker can access resources in your AWS account later in the example.
###Code
from sagemaker import get_execution_role
sagemaker_role = get_execution_role()
###Output
_____no_output_____
###Markdown
Download and prepare a model from TensorFlow HubThe TensorFlow Serving Container works with any model stored in TensorFlow's [SavedModel format](https://www.tensorflow.org/guide/saved_model). This could be the output of your own training job or a model trained elsewhere. For this example, we will use a pre-trained version of the MobileNet V2 image classification model from [TensorFlow Hub](https://tfhub.dev/).The TensorFlow Hub models are pre-trained, but do not include a serving ``signature_def``, so we'll need to load the model into a TensorFlow session, define the input and output layers, and export it as a SavedModel. There is a helper function in this notebook's `sample_utils.py` module that will do that for us.
###Code
import sample_utils
model_name = 'mobilenet_v2_140_224'
export_path = 'mobilenet'
model_path = sample_utils.tfhub_to_savedmodel(model_name, export_path)
print('SavedModel exported to {}'.format(model_path))
###Output
_____no_output_____
###Markdown
After exporting the model, we can inspect it using TensorFlow's ``saved_model_cli`` command. In the command output, you should see ```MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:signature_def['serving_default']:...```The command output should also show details of the model inputs and outputs.
###Code
!saved_model_cli show --all --dir {model_path}
###Output
_____no_output_____
###Markdown
Optional: add a second modelThe TensorFlow Serving container can host multiple models, if they are packaged in the same model archive file. Let's prepare a second version of the MobileNet model so we can demonstrate this. The `mobilenet_v2_035_224` model is a shallower version of MobileNetV2 that trades accuracy for smaller model size and faster computation, but has the same inputs and outputs.
###Code
second_model_name = 'mobilenet_v2_035_224'
second_model_path = sample_utils.tfhub_to_savedmodel(second_model_name, export_path)
print('SavedModel exported to {}'.format(second_model_path))
###Output
_____no_output_____
###Markdown
Next we need to create a model archive file containing the exported model. Create a model archive fileSageMaker models need to be packaged in `.tar.gz` files. When your endpoint is provisioned, the files in the archive will be extracted and put in `/opt/ml/model/` on the endpoint.
###Code
!tar -C "$PWD" -czf mobilenet.tar.gz mobilenet/
###Output
_____no_output_____
###Markdown
Upload the model archive file to S3We now have a suitable model archive ready in our notebook. We need to upload it to S3 before we can create a SageMaker Model that uses it. We'll use the SageMaker Python SDK to handle the upload.
###Code
from sagemaker.session import Session
model_data = Session().upload_data(path='mobilenet.tar.gz', key_prefix='model')
print('model uploaded to: {}'.format(model_data))
###Output
_____no_output_____
###Markdown
Create a SageMaker Model and EndpointNow that the model archive is in S3, we can create a Model and deploy it to an Endpoint with a few lines of python code:
###Code
from sagemaker.tensorflow.serving import Model
# Use an env argument to set the name of the default model.
# This is optional, but recommended when you deploy multiple models
# so that requests that don't include a model name are sent to a
# predictable model.
env = {'SAGEMAKER_TFS_DEFAULT_MODEL_NAME': 'mobilenet_v2_140_224'}
model = Model(model_data=model_data, role=sagemaker_role, framework_version='1.12', env=env)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge')
###Output
_____no_output_____
###Markdown
Make predictions using the endpointThe endpoint is now up and running, and ready to handle inference requests. The `deploy` call above returned a `predictor` object. The `predict` method of this object handles sending requests to the endpoint. It also automatically handles JSON serialization of our input arguments, and JSON deserialization of the prediction results.We'll use these sample images:
###Code
# read the image files into a tensor (numpy array)
kitten_image = sample_utils.image_file_to_tensor('kitten.jpg')
# get a prediction from the endpoint
# the image input is automatically converted to a JSON request.
# the JSON response from the endpoint is returned as a python dict
result = predictor.predict(kitten_image)
# show the raw result
print(result)
###Output
_____no_output_____
###Markdown
Add class labels and show formatted resultsThe `sample_utils` module includes functions that can add Imagenet class labels to our results and print formatted output. Let's use them to get a better sense of how well our model worked on the input image.
###Code
# add class labels to the predicted result
sample_utils.add_imagenet_labels(result)
# show the probabilities and labels for the top predictions
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Optional: make predictions using the second modelIf you added the second model (`mobilenet_v2_035_224`) in the previous optional step, then you can also send prediction requests to that model. To do that, we'll need to create a new `predictor` object.Note: if you are using local mode (by changing the instance type to `local` or `local_gpu`), you'll need to create the new predictor this way instead:```predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224', sagemaker_session=predictor.sagemaker_session)```
###Code
from sagemaker.tensorflow.serving import Predictor
# use values from the default predictor to set up the new one
predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224')
# make a new prediction
bee_image = sample_utils.image_file_to_tensor('bee.jpg')
result = predictor2.predict(bee_image)
# show the formatted result
sample_utils.add_imagenet_labels(result)
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Additional InformationThe TensorFlow Serving Container supports additional features not covered in this notebook, including support for:- TensorFlow Serving REST API requests, including classify and regress requests- CSV input- Other JSON formatsFor information on how to use these features, refer to the documentation in the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst). Cleaning upTo avoid incurring charges to your AWS account for the resources used in this tutorial, you need to delete the SageMaker Endpoint.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Using the SageMaker TensorFlow Serving ContainerThe [SageMaker TensorFlow Serving Container](https://github.com/aws/sagemaker-tensorflow-serving-container) makes it easy to deploy trained TensorFlow models to a SageMaker Endpoint without the need for any custom model loading or inference code.In this example, we will show how to deploy one or more pre-trained models from [TensorFlow Hub](https://www.tensorflow.org/hub/) to a SageMaker Endpoint using the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk), and then use the model(s) to perform inference requests. Next, we'll get the IAM execution role from our notebook environment, so that SageMaker can access resources in your AWS account later in the example.
###Code
from sagemaker import get_execution_role
sagemaker_role = get_execution_role()
###Output
_____no_output_____
###Markdown
Download and prepare a model from TensorFlow HubThe TensorFlow Serving Container works with any model stored in TensorFlow's [SavedModel format](https://www.tensorflow.org/guide/saved_model). This could be the output of your own training job or a model trained elsewhere. For this example, we will use a pre-trained version of the MobileNet V2 image classification model from [TensorFlow Hub](https://tfhub.dev/).The TensorFlow Hub models are pre-trained, but do not include a serving ``signature_def``, so we'll need to load the model into a TensorFlow session, define the input and output layers, and export it as a SavedModel. There is a helper function in this notebook's `sample_utils.py` module that will do that for us.
###Code
import sample_utils
model_name = "mobilenet_v2_140_224"
export_path = "mobilenet"
model_path = sample_utils.tfhub_to_savedmodel(model_name, export_path)
print("SavedModel exported to {}".format(model_path))
###Output
_____no_output_____
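###Markdown
For readers curious what that helper roughly does, below is a hedged sketch of the general TF1 pattern for wrapping a TF-Hub module in a serving signature. The module URL, tensor names, image size and session handling here are assumptions for illustration; the actual `sample_utils.tfhub_to_savedmodel` implementation may differ.
###Code
# Rough sketch only (NOT the actual sample_utils implementation).
# Assumes a TF1-style graph/session and a TF-Hub image classification module.
import tensorflow as tf
import tensorflow_hub as hub

def sketch_tfhub_to_savedmodel(module_url, export_dir):
    with tf.Graph().as_default(), tf.Session() as sess:
        module = hub.Module(module_url)  # e.g. a MobileNet V2 classification module
        # placeholder for a batch of 224x224 RGB images (image size is an assumption)
        images = tf.placeholder(tf.float32, shape=(None, 224, 224, 3), name="images")
        logits = module(images)
        sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
        # write a SavedModel with a 'serving_default' signature_def
        tf.saved_model.simple_save(sess, export_dir,
                                   inputs={"images": images},
                                   outputs={"logits": logits})
    return export_dir
###Output
_____no_output_____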
###Markdown
After exporting the model, we can inspect it using TensorFlow's ``saved_model_cli`` command. In the command output, you should see ```MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:signature_def['serving_default']:...```The command output should also show details of the model inputs and outputs.
###Code
!saved_model_cli show --all --dir {model_path}
###Output
_____no_output_____
###Markdown
Optional: add a second modelThe TensorFlow Serving container can host multiple models, if they are packaged in the same model archive file. Let's prepare a second version of the MobileNet model so we can demonstrate this. The `mobilenet_v2_035_224` model is a shallower version of MobileNetV2 that trades accuracy for smaller model size and faster computation, but has the same inputs and outputs.
###Code
second_model_name = "mobilenet_v2_035_224"
second_model_path = sample_utils.tfhub_to_savedmodel(second_model_name, export_path)
print("SavedModel exported to {}".format(second_model_path))
###Output
_____no_output_____
###Markdown
Next we need to create a model archive file containing the exported model. Create a model archive fileSageMaker models need to be packaged in `.tar.gz` files. When your endpoint is provisioned, the files in the archive will be extracted and put in `/opt/ml/model/` on the endpoint.
###Code
!tar -C "$PWD" -czf mobilenet.tar.gz mobilenet/
###Output
_____no_output_____
###Markdown
Upload the model archive file to S3. We now have a suitable model archive ready in our notebook. We need to upload it to S3 before we can create a SageMaker Model that uses it. We'll use the SageMaker Python SDK to handle the upload.
###Code
from sagemaker.session import Session
model_data = Session().upload_data(path="mobilenet.tar.gz", key_prefix="model")
print("model uploaded to: {}".format(model_data))
###Output
_____no_output_____
###Markdown
Create a SageMaker Model and EndpointNow that the model archive is in S3, we can create a Model and deploy it to an Endpoint with a few lines of python code:
###Code
from sagemaker.tensorflow.serving import Model
# Use an env argument to set the name of the default model.
# This is optional, but recommended when you deploy multiple models
# so that requests that don't include a model name are sent to a
# predictable model.
env = {"SAGEMAKER_TFS_DEFAULT_MODEL_NAME": "mobilenet_v2_140_224"}
model = Model(model_data=model_data, role=sagemaker_role, framework_version="1.15.2", env=env)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")
###Output
_____no_output_____
###Markdown
Make predictions using the endpointThe endpoint is now up and running, and ready to handle inference requests. The `deploy` call above returned a `predictor` object. The `predict` method of this object handles sending requests to the endpoint. It also automatically handles JSON serialization of our input arguments, and JSON deserialization of the prediction results.We'll use these sample images:
###Code
# read the image files into a tensor (numpy array)
kitten_image = sample_utils.image_file_to_tensor("kitten.jpg")
# get a prediction from the endpoint
# the image input is automatically converted to a JSON request.
# the JSON response from the endpoint is returned as a python dict
result = predictor.predict(kitten_image)
# show the raw result
print(result)
###Output
_____no_output_____
###Markdown
Add class labels and show formatted resultsThe `sample_utils` module includes functions that can add Imagenet class labels to our results and print formatted output. Let's use them to get a better sense of how well our model worked on the input image.
###Code
# add class labels to the predicted result
sample_utils.add_imagenet_labels(result)
# show the probabilities and labels for the top predictions
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Optional: make predictions using the second modelIf you added the second model (`mobilenet_v2_035_224`) in the previous optional step, then you can also send prediction requests to that model. To do that, we'll need to create a new `predictor` object.Note: if you are using local mode (by changing the instance type to `local` or `local_gpu`), you'll need to create the new predictor this way instead:```predictor2 = Predictor(predictor.endpoint, model_name='mobilenet_v2_035_224', sagemaker_session=predictor.sagemaker_session)```
###Code
from sagemaker.tensorflow.serving import Predictor
# use values from the default predictor to set up the new one
predictor2 = Predictor(predictor.endpoint, model_name="mobilenet_v2_035_224")
# make a new prediction
bee_image = sample_utils.image_file_to_tensor("bee.jpg")
result = predictor2.predict(bee_image)
# show the formatted result
sample_utils.add_imagenet_labels(result)
sample_utils.print_probabilities_and_labels(result)
###Output
_____no_output_____
###Markdown
Additional InformationThe TensorFlow Serving Container supports additional features not covered in this notebook, including support for:- TensorFlow Serving REST API requests, including classify and regress requests- CSV input- Other JSON formatsFor information on how to use these features, refer to the documentation in the [SageMaker Python SDK](https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst). Cleaning upTo avoid incurring charges to your AWS account for the resources used in this tutorial, you need to delete the SageMaker Endpoint.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____ |
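###Markdown
For reference, here is a minimal sketch of the request body formats mentioned above under Additional Information. The feature names and values are placeholders; this only illustrates the JSON shapes used by the TensorFlow Serving REST API (predict vs. classify/regress) and is not tied to the endpoint deleted above.
###Code
import json

# Illustrative only: a predict request wraps the inputs in an "instances" list
predict_request = {"instances": [[0.1, 0.2, 0.3]]}

# Illustrative only: classify/regress requests use "examples" (feature names are placeholders)
classify_request = {"signature_name": "serving_default",
                    "examples": [{"feature_1": 0.1, "feature_2": "some value"}]}

print(json.dumps(predict_request))
print(json.dumps(classify_request))
###Output
_____no_output_____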
EvoMusicCompanion/jupyter/simulation-runs.ipynb | ###Markdown
Get corpus
###Code
coreCorpus = music21.corpus.corpora.CoreCorpus()
curr_corpus = music21.corpus.search('mozart', 'composer')
scores = []
for c in curr_corpus:
score = c.parse()
score = util.transpose_piece(score, 'C')
scores.append(score)
lakh_corpus = corpus.corpora.LocalCorpus('lakh')
curr_corpus = lakh_corpus.metadataBundle
scores = []
for c in curr_corpus:
score = c.parse()
score = util.transpose_piece(score, 'C')
scores.append(score)
wiki_corpus = corpus.corpora.LocalCorpus('wiki')
curr_corpus = wiki_corpus.metadataBundle
scores = []
for c in curr_corpus:
score = c.parse()
score = util.transpose_piece(score, 'C')
scores.append(score)
folk_corpus = [corpus.getComposer('essenFolksong')[11]]
folk_corpus
folk_corpus = corpus.getComposer('essenFolksong')
scores = []
for p in folk_corpus:
score = converter.parse(p)
score = util.transpose_piece(score, 'C')
scores.append(score)
print(len(scores))
real_scores
real_scores[0].metadata.title
real_scores = []
for s in scores:
if s is None:
continue
for p in s.scores:
if p is not None:
p_t = util.transpose_piece(p, 'C')
real_scores.append(p_t)
notes = modelTrainer.flatten(modelTrainer.get_pitches_per_score(real_scores))
bigram = modelTrainer.get_probabilistic_matrix(modelTrainer.get_bigram_matrix(notes))
trigram = modelTrainer.get_probabilistic_matrix(modelTrainer.get_trigram_matrix(notes))
trigram.to_csv('./folk_trigram.csv')
bigram.to_csv('./folk_bigram.csv')
duration_matrix = modelTrainer.train_duration_matrix(real_scores)
duration_matrix.to_csv('./folk_duration_matrix.csv')
for n1_n2 in trigram:
total_count = sum(trigram[n1_n2])
for n3 in trigram[n1_n2].keys():
if trigram[n1_n2][n3] != 0.0:
trigram[n1_n2][n3] /= total_count
s = real_scores[0]
notes = s.parts[0].getElementsByClass(stream.Measure).flat.notesAndRests
for n in notes:
print(n.duration.type)
import csv
import os
header = ["I", "NAME", "COMPOSER", "FITNESS", "C_TONE", "C_TONE_B", "CADENCE", "L_NOTE", "I_RES", "L_INT", "L_DUR",
"CONS_R", "CONS_N", "PATTERN_D", "PATTERN_SD"]
file = f'./corpus-data-c.csv'
counter = -1
with open(file, mode='w', encoding="utf-8") as f:
writer = csv.writer(f, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL, lineterminator = '\n')
writer.writerow(header)
for s in real_scores:
counter += 1
if s is None:
continue
piece = musicPlayer.music21_score_to_individual(s)
if piece is None:
continue
row = [
counter,
s.metadata.title,
s.metadata.composer,
piece.fitness,
piece.fitnesses["C_TONE"],
piece.fitnesses["C_TONE_B"],
piece.fitnesses["CADENCE"],
piece.fitnesses["L_NOTE"],
piece.fitnesses["I_RES"],
piece.fitnesses["L_INT"],
piece.fitnesses["L_DUR"],
piece.fitnesses["CONS_R"],
piece.fitnesses["CONS_N"],
piece.fitnesses["PATTERN_D"],
piece.fitnesses["PATTERN_SD"],
]
writer.writerow(row)
musicPlayer.play_music_xml(real_scores[5553])
len(real_scores)
for n in real_scores[654].parts[0].getElementsByClass("Note"):
print(n)
s = util.transpose_piece(real_scores[2200], 'C')
s.analyze('key')
ind = musicPlayer.music21_score_to_individual(real_scores[5548])
ind.measures = ind.measures[1:13]
initialisation.set_chords(ind)
fitness.set_fitness(ind)
ind.fitnesses
musicPlayer.play_music_xml([ind])
fitness.cadence(ind)
real_scores[2200].parts[0].getElementsByClass("Measure")[1:12]
s = Score()
for i in range(len(individuals)):
print(f'Score {i}: {curr_corpus[i]}')
fitness.print_fitness_values(individuals[i])
print('-----------------')
score = music21.corpus.parse('joplin/maple_leaf_rag.mxl')
scores = util.transpose_piece(score, 'C')
scores = [scores]
real_scores[160].show('musicxml')
###Output
_____no_output_____
###Markdown
Reference subrater ("SUBRATER_TARGET_VALUE") fitness values for three corpus pieces. Graf und Nonne (i = 1600): {"C_TONE": 0.3339285714285714, "C_TONE_B": -0.25, "CADENCE": -1.0, "L_NOTE": 0.06521739130434782, "I_RES": 0.0, "L_INT": -0.05714285714285714, "L_DUR": -0.0, "CONS_R": -0.125, "CONS_N": 0.0, "PATTERN_D": 0.35996190558334773, "PATTERN_SD": 0.47017013853776923}. SO OFT ICH MEINE TABACKSPFEIFE (i = 2200): {'C_TONE': 0.6509920634920635, 'C_TONE_B': -0.125, 'CADENCE': 1.0, 'L_NOTE': 0.5882352941176471, 'I_RES': 0.08695652173913043, 'L_INT': 0.0, 'L_DUR': -0.014925373134328358, 'CONS_R': 0.0, 'CONS_N': -0.25545634920634924, 'PATTERN_D': 0.8006289208103725, 'PATTERN_SD': 0.8607966145638791}. The Maid of Ballydoo (i = 5548): {'C_TONE': 0.6226190476190476, 'C_TONE_B': 0.4375, 'CADENCE': 0.5, 'L_NOTE': 1.0, 'I_RES': 0.23333333333333334, 'L_INT': -0.017241379310344827, 'L_DUR': -0.05172413793103448, 'CONS_R': 0.0, 'CONS_N': -0.14107142857142857, 'PATTERN_D': 0.8673507119237011, 'PATTERN_SD': 0.15332107426943575}
###Code
dur_matrix = modelTrainer.train_duration_matrix(scores)
pitch_matrix = modelTrainer.train_pitch_matrix(scores);
sim = simulation.Simulation(0.5, 100)
population = sim.run_interactively()
sim.run(20, None, None)
sim.pitch_matrix
musicPlayer.play(population[0:4])
###Output
_____no_output_____ |
examples/LCOMBS/testing.ipynb | ###Markdown
Predicting Abalone Snail Sex Using Physical Characteristics Data was found at https://archive.ics.uci.edu/ml/datasets/Abalone , University of California, Irvine's Machine Learning repository.
###Code
import os
import sys
# Modify the path
sys.path.append("..")
import numpy as np
import pandas as pd
import yellowbrick as yb
import matplotlib.pyplot as plt
os.chdir("/Users/lisacombs/Documents/yellowbrick/")
## Load the data
data = pd.read_csv("./life.csv")
data.head()
# Use only M/F, no infants and make variable numeric.
data = data.loc[data['sex'].isin(['M','F'])]
data['sex'] = np.where(data['sex']=='M', 0, 1)
# Feature Analysis Imports
# NOTE that all these are available for import from the `yellowbrick.features` module
from yellowbrick.features.rankd import Rank2D
from yellowbrick.features.radviz import RadViz
from yellowbrick.features.pcoords import ParallelCoordinates
list(data) # numeric variables to be used as features
# Specify the features of interest
features = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight',
' rings']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.sex.as_matrix()
# Instantiate the visualizer with the Covariance ranking algorithm
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
# Instantiate the visualizer with the Pearson ranking algorithm
visualizer = Rank2D(features=features, algorithm='pearson')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
# Specify the features of interest and the classes of the target
features = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight',
' rings']
classes = ['M', 'F']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.sex.as_matrix()
# Instantiate the visualizer
visualizer = RadViz(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
# Instantiate the visualizer
visualizer = ParallelCoordinates(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
# Regression Evaluation Imports
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import train_test_split
from yellowbrick.regressor import PredictionError, ResidualsPlot
# Load the data - without classifier
feature_names = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight',
' rings']
target_name = ' sh_weight'
# Get the X and y data from the DataFrame
X = data[feature_names].as_matrix()
y = data[target_name].as_matrix()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the linear model and visualizer
ridge = Ridge()
visualizer = ResidualsPlot(ridge)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
feature_names = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight']
target_name = ' rings'
# Get the X and y data from the DataFrame
X = data[feature_names].as_matrix()
y = data[target_name].as_matrix()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the linear model and visualizer
lasso = Lasso()
visualizer = PredictionError(lasso)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
# Classifier Evaluation Imports
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport, ROCAUC, ClassBalance
# Specify the features of interest and the classes of the target
features = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight',
' rings']
classes = ['M', 'F']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.sex.as_matrix()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the classification model and visualizer
bayes = GaussianNB()
visualizer = ClassificationReport(bayes, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
# Specify the features of interest and the classes of the target
features = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight',
' rings']
classes = ['M', 'F']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.sex.as_matrix()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
logistic = LogisticRegression()
visualizer = ROCAUC(logistic)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
# Specify the features of interest and the classes of the target
features = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight',
' rings']
classes = ['M', 'F']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.sex.as_matrix()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the classification model and visualizer
forest = RandomForestClassifier()
visualizer = ClassBalance(forest, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
###Output
_____no_output_____
###Markdown
Predicting Abalone Snail Sex Using Physical Characteristics Data was found at https://archive.ics.uci.edu/ml/datasets/Abalone , University of California, Irvine's Machine Learning repository.
###Code
import os
import sys
# Modify the path
sys.path.append("..")
import numpy as np
import pandas as pd
import yellowbrick as yb
import matplotlib.pyplot as plt
os.chdir("/Users/lisacombs/Documents/yellowbrick/")
## Load the data
data = pd.read_csv("./life.csv")
data.head()
# Use only M/F, no infants and make variable numeric.
data = data.loc[data['sex'].isin(['M','F'])]
data['sex'] = np.where(data['sex']=='M', 0, 1)
# Feature Analysis Imports
# NOTE that all these are available for import from the `yellowbrick.features` module
from yellowbrick.features.rankd import Rank2D
from yellowbrick.features.radviz import RadViz
from yellowbrick.features.pcoords import ParallelCoordinates
list(data) # numeric variables to be used as features
# Specify the features of interest
features = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight',
' rings']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.sex.as_matrix()
# Instantiate the visualizer with the Covariance ranking algorithm
visualizer = Rank2D(features=features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show() # Draw/show/show the data
# Instantiate the visualizer with the Pearson ranking algorithm
visualizer = Rank2D(features=features, algorithm='pearson')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show() # Draw/show/show the data
# Specify the features of interest and the classes of the target
features = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight',
' rings']
classes = ['M', 'F']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.sex.as_matrix()
# Instantiate the visualizer
visualizer = RadViz(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show() # Draw/show/show the data
# Instantiate the visualizer
visualizer = ParallelCoordinates(classes=classes, features=features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.show() # Draw/show/show the data
# Regression Evaluation Imports
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import train_test_split
from yellowbrick.regressor import PredictionError, ResidualsPlot
# Load the data - without classifier
feature_names = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight',
' rings']
target_name = ' sh_weight'
# Get the X and y data from the DataFrame
X = data[feature_names].as_matrix()
y = data[target_name].as_matrix()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the linear model and visualizer
ridge = Ridge()
visualizer = ResidualsPlot(ridge)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.show() # Draw/show/show the data
feature_names = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight']
target_name = ' rings'
# Get the X and y data from the DataFrame
X = data[feature_names].as_matrix()
y = data[target_name].as_matrix()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the linear model and visualizer
lasso = Lasso()
visualizer = PredictionError(lasso)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.show() # Draw/show/show the data
# Classifier Evaluation Imports
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport, ROCAUC, ClassBalance
# Specify the features of interest and the classes of the target
features = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight',
' rings']
classes = ['M', 'F']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.sex.as_matrix()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the classification model and visualizer
bayes = GaussianNB()
visualizer = ClassificationReport(bayes, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.show() # Draw/show/show the data
# Specify the features of interest and the classes of the target
features = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight',
' rings']
classes = ['M', 'F']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.sex.as_matrix()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
logistic = LogisticRegression()
visualizer = ROCAUC(logistic)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.show() # Draw/show/show the data
# Specify the features of interest and the classes of the target
features = [' length',
' diameter',
' height',
' w_weight',
' s_weight',
' v_weight',
' sh_weight',
' rings']
classes = ['M', 'F']
# Extract the numpy arrays from the data frame
X = data[features].as_matrix()
y = data.sex.as_matrix()
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the classification model and visualizer
forest = RandomForestClassifier()
visualizer = ClassBalance(forest, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.show() # Draw/show/show the data
###Output
_____no_output_____ |
data/isosws_atlas/preprocess/step0_fits_to_pkl/save_old/isosws_write_pickles.ipynb | ###Markdown
ISO-SWS data preprocessing: convert to pickled dataframes
###Code
import glob
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.io import fits
from astropy.table import Table
from IPython.core.debugger import set_trace as st
from scipy.interpolate import splev, splrep
from spectres import spectres
def downsample_to_cassis_interpolate(df):
"""Downsample to match the wavelength grid of CASSIS."""
wave = df['wavelength']
flux = df['flux']
spec_error = df['spec_error']
norm_error = df['norm_error']
def spline(x, y, new_x):
spline_model = splrep(x=x, y=y)
new_y = splev(x=new_x, tck=spline_model)
return new_y
new_wave = cassis_wave
new_flux = spline(wave, flux, new_wave)
new_spec_error = spline(wave, spec_error, new_wave)
new_norm_error = spline(wave, norm_error, new_wave)
col_stack = np.column_stack([new_wave, new_flux, new_spec_error, new_norm_error])
col_names = ['wavelength', 'flux', 'spec_error', 'norm_error']
df2 = pd.DataFrame(col_stack, columns=col_names)
return df2
def downsample_to_cassis_spectres(df):
"""Downsample to match the wavelength grid of CASSIS."""
def spline(x, y, new_x):
spline_model = splrep(x=x, y=y)
new_y = splev(x=new_x, tck=spline_model)
return new_y
wave = df['wavelength']
flux = df['flux']
spec_error = df['spec_error']
norm_error = df['norm_error']
wave = wave.values
flux = flux.values
spec_error = spec_error.values
norm_error = norm_error.values
new_wave = cassis_wave
new_flux, new_spec_error = spectres(new_spec_wavs=new_wave, old_spec_wavs=wave,
spec_fluxes=flux, spec_errs=spec_error)
_, new_norm_error = spectres(new_spec_wavs=new_wave, old_spec_wavs=wave,
spec_fluxes=flux, spec_errs=norm_error)
col_stack = np.column_stack([new_wave, new_flux, new_spec_error, new_norm_error])
col_names = ['wavelength', 'flux', 'spec_error', 'norm_error']
df2 = pd.DataFrame(col_stack, columns=col_names)
return df2
# Some useful functions....
cassis_wave = np.loadtxt('isosws_misc/cassis_wavelength_grid.txt', delimiter=',')
def convert_fits_to_pickle(path, verify_pickle=False, verbose=False, match_cassis_wavegrid=False):
"""Full conversion from ISO-SWS <filename.fits to <filename>.pkl, which contains a pd.DataFrame.
Args:
path (str): Path to <filename>.fits file (of an ISO-SWS observation).
        verify_pickle (bool): Confirm the pickle was successfully created; does so by comparing the
pd.DataFrame before and after writing the pickle.
Returns:
True if successful.
Note:
DataFrame can be retrieved from the pickle by, e.g., df = pd.read_pickle(pickle_path).
"""
if verbose:
print('Pickling: ', path)
# Convert .fits file to pandas DataFrame, header.Header object.
try:
df, header = isosws_fits_to_dataframe(path)
except Exception as e:
raise(e)
# Downsample to match the CASSIS wavegrid if desired.
if match_cassis_wavegrid:
# df = downsample_to_cassis_interpolate(df)
df = downsample_to_cassis_spectres(df)
# Determine the pickle_path to save to. Being explicit here to 'pickle_path' is clear.
base_filename = path.replace('.fit', '.pkl').split('/')[-1]
# Save the dataframe to a pickle.
pickle_path = 'spectra/' + base_filename
df.to_pickle(pickle_path)
if verbose:
print('...saved: ', pickle_path)
# Test dataframes for equality before/after pickling if verify_pickle == True.
if verify_pickle:
tmp_df = pd.read_pickle(pickle_path)
if df.equals(tmp_df):
if verbose:
print()
print('DataFrame integrity verified -- pickling went OK!')
print()
else:
raise ValueError('Dataframes not equal before/after pickling!')
return pickle_path
def isosws_fits_to_dataframe(path, test_for_monotonicity=True):
"""Take an ISO-SWS .fits file, return a pandas DataFrame containing the data (with labels) and astropy header.
Args:
path (str): Path of the .fits file (assumed to be an ISO-SWS observation file).
test_for_monotonicity (bool, optional): Check that the wavelength grid is monotinically increasing.
Returns:
df (pd.DataFrame): Pandas dataframe with appropriate labels (wavelength, flux, etc.).
header (astropy.io.fits.header.Header): Information about observation from telescope.
Note:
Header can be manipulated with, e.g., header.totextfile(some_path).
See http://docs.astropy.org/en/stable/io/fits/api/headers.html.
"""
def monotonically_increasing(array):
"""Test if a list has monotonically increasing elements. Thank you stack overflow."""
return all(x < y for x, y in zip(array, array[1:]))
# Read in .fits file.
hdu = fits.open(path)
# Retrieve the header object.
header = hdu[0].header
# Extract column labels/descriptions from header.
# Can't do this because the header is not well-defined. That's OK, hard-coded the new column names below.
# Convert data to pandas DataFrame.
dtable = Table(hdu[0].data)
df = dtable.to_pandas()
# Convert the nondescriptive column labels (e.g., 'col01def', 'col02def') to descriptive labels.
old_keys = list(df.keys())
new_keys = ['wavelength', 'flux', 'spec_error', 'norm_error']
mydict = dict(zip(old_keys, new_keys))
df = df.rename(columns=mydict) # Renamed DataFrame columns here.
if test_for_monotonicity:
if not monotonically_increasing(df['wavelength']):
raise ValueError('Wavelength array not monotonically increasing!', path)
return df, header
###Output
_____no_output_____
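###Markdown
A minimal usage sketch of the helper functions defined above, applied to a single observation. The file name below is a placeholder rather than a real TDT from the atlas, and the 'spectra/' output directory is assumed to exist.
###Code
# Illustrative only: convert one ISO-SWS .fit file using the helpers defined above.
example_path = 'fits_spectra/12345678_sws.fit'  # placeholder file name
df_single, header_single = isosws_fits_to_dataframe(example_path)
print(df_single.columns.tolist())  # ['wavelength', 'flux', 'spec_error', 'norm_error']
pickle_path = convert_fits_to_pickle(example_path, verify_pickle=True,
                                     verbose=True, match_cassis_wavegrid=True)
print('Pickled to:', pickle_path)
###Output
_____no_output_____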
###Markdown
*** Find out how many files we're working with
###Code
spec_dir = 'fits_spectra/'
spec_files = np.sort(glob.glob(spec_dir + '*.fit'))
len(spec_files)
###Output
_____no_output_____
###Markdown
Build dataframe containing metadata (including labels) and paths to pickled files. Creates isosws_metadata_df.pkl.
###Code
# Only do this once.
recreate_meta_pickle = True
if recreate_meta_pickle:
def create_swsmeta_dataframe():
"""Create a dataframe that contains the metadata for the ISO-SWS Atlas."""
def simbad_results():
"""Create a dictionary of the SIMBAD object type query results."""
simbad_results = np.loadtxt('isosws_misc/simbad_type.csv', delimiter=';', dtype=str)
simbad_dict = dict(simbad_results)
return simbad_dict
def sexagesimal_to_degree(tupe):
"""Convert from hour:minute:second to degrees."""
sex_str = tupe[0] + ' ' + tupe[1]
c = SkyCoord(sex_str, unit=(u.hourangle, u.deg))
return c.ra.deg, c.dec.deg
def transform_ra_dec_into_degrees(df):
"""Perform full ra, dec conversion to degrees."""
ra = []
dec = []
for index, value in enumerate(zip(df['ra'], df['dec'])):
ra_deg, dec_deg = sexagesimal_to_degree(value)
ra.append(ra_deg)
dec.append(dec_deg)
df = df.assign(ra=ra)
df = df.assign(dec=dec)
return df
# Read in the metadata
# meta_filename = 'isosws_misc/kraemer_class.csv'
meta_filename = 'isosws_misc/kraemer_class_fixed.csv'
swsmeta = np.loadtxt(meta_filename, delimiter=';', dtype=str)
df = pd.DataFrame(swsmeta[1:], columns=swsmeta[0])
# Add a column for the pickle paths (dataframes with wave, flux, etc).
pickle_paths = ['spectra/' + x.zfill(8) + '_sws.pkl' for x in df['tdt']]
df = df.assign(file_path=pickle_paths)
# Add a column for SIMBAD type, need to query 'simbad_type.csv' for this. Not in order naturally...
object_names = df['object_name']
object_type_dict = simbad_results()
object_types = [object_type_dict.get(key, "empty") for key in object_names]
df = df.assign(object_type=object_types)
# Transform ra and dec into degrees.
df = transform_ra_dec_into_degrees(df)
# Remove rows of objects not pickled (typically due to a data error).
bool_list = []
for path in df['file_path']:
if os.path.isfile(path):
bool_list.append(True)
else:
bool_list.append(False)
df = df.assign(data_ok=bool_list)
df2 = df.query('data_ok == True')
return df2
df = create_swsmeta_dataframe()
df.reset_index(drop=True, inplace=True)
df.to_pickle('metadata.pkl')
df.head()
df
np.unique(df['group'].values)
df.describe()
mdf = pd.read_pickle('isosws_metadata_df.pkl')
# mdf
###Output
_____no_output_____
###Markdown
Convert spectra to dataframes and save to disk as pickles. Requirement: the observation must be listed in df['file_path']
###Code
perform_conversion = True
def is_in_meta_pickle(df, fits_file):
tdt = fits_file.split('/')[-1].split('_sws.fit')[0]
tdt_zfill = [x.zfill(8) for x in df['tdt'].values]
if tdt in tdt_zfill:
return True
else:
return False
# Note the break I've added; remove for full conversion.
if perform_conversion:
n_skipped = 0
print('=============================\nConverting fits files...\n=============================\n')
# Iterate over all the fits files and convert them.
for index, fits_file in enumerate(spec_files):
if not is_in_meta_pickle(df, fits_file):
print(fits_file, 'Skipping.')
n_skipped += 1
continue
# if index >= 22:
# break
if index % 50 == 0:
print(index, '/', len(spec_files))
try:
pickle_path = convert_fits_to_pickle(fits_file, verify_pickle=True,
verbose=False, match_cassis_wavegrid=True)
except Exception as e:
print(e)
print(fits_file, 'EXCEPTION!')
n_skipped += 1
continue
print('\n=============================\nComplete.\n=============================')
print('Number of spectra skipped due to missing monotonicity: ', n_skipped)
###Output
=============================
Converting fits files...
=============================
0 / 1262
fits_spectra/04800954_sws.fit Skipping.
fits_spectra/05601993_sws.fit Skipping.
50 / 1262
100 / 1262
150 / 1262
200 / 1262
250 / 1262
300 / 1262
fits_spectra/19900101_sws.fit Skipping.
350 / 1262
fits_spectra/24700418_sws.fit Skipping.
fits_spectra/24801029_sws.fit Skipping.
fits_spectra/25502252_sws.fit Skipping.
fits_spectra/25601404_sws.fit Skipping.
fits_spectra/26601410_sws.fit Skipping.
400 / 1262
fits_spectra/28100117_sws.fit Skipping.
fits_spectra/28604933_sws.fit Skipping.
fits_spectra/28702002_sws.fit Skipping.
450 / 1262
fits_spectra/29700401_sws.fit Skipping.
500 / 1262
fits_spectra/31901604_sws.fit Skipping.
fits_spectra/33100101_sws.fit Skipping.
fits_spectra/33201303_sws.fit Skipping.
550 / 1262
fits_spectra/33800505_sws.fit Skipping.
fits_spectra/33800604_sws.fit Skipping.
fits_spectra/36100832_sws.fit Skipping.
600 / 1262
fits_spectra/37501937_sws.fit Skipping.
650 / 1262
700 / 1262
fits_spectra/42500605_sws.fit Skipping.
750 / 1262
800 / 1262
850 / 1262
900 / 1262
950 / 1262
1000 / 1262
fits_spectra/66002132_sws.fit Skipping.
1050 / 1262
fits_spectra/71101311_sws.fit Skipping.
fits_spectra/72200302_sws.fit Skipping.
fits_spectra/72501593_sws.fit Skipping.
fits_spectra/75002201_sws.fit Skipping.
1150 / 1262
fits_spectra/82802566_sws.fit Skipping.
1200 / 1262
fits_spectra/86301602_sws.fit Skipping.
1250 / 1262
=============================
Complete.
=============================
Number of spectra skipped due to missing monotonicity: 27
###Markdown
*** *** *** Appendix A -- Example transformation from .fits to pd.dataframe Convert spectrum file to dataframe, header
###Code
# Grab the first file from the glob list.
test_spec = spec_files[0]
test_spec
# Read it in with astropy.io.fits, check dimensions.
test_hdu = fits.open(test_spec)
test_hdu.info()
# Utilize our defined function to transform a string of the .fits filename to a pandas dataframe and header.
# 'header' will be an astropy.io.fits.header.Header object; see a couple subsections below for conversion options.
df, header = isosws_fits_to_dataframe(test_spec)
###Output
_____no_output_____
###Markdown
Inspect dataframe
###Code
df.shape
df.head()
df.describe()
###Output
_____no_output_____
###Markdown
Header from the .fits file
###Code
type(header)
# Uncomment below to see full header of one file as an example.
header
# Can convert to other formats if we want to use the header information for something.
# See http://docs.astropy.org/en/stable/io/fits/api/headers.html
# header_str = header.tostring()
# header.totextfile('test_header.csv')
###Output
_____no_output_____
###Markdown
Compare smoothing windows
###Code
wave = test_hdu[0].data.T[0]
flux = test_hdu[0].data.T[1]
fluxerr = test_hdu[0].data.T[2]
cassis_wave = np.loadtxt('isosws_misc/cassis_wavelength_grid.txt', delimiter=',')
from spectres import spectres
# spectres.spectres(new_spec_wavs, old_spec_wavs, spec_fluxes, spec_errs=None)
downsamp_wave = cassis_wave
downsamp_flux, downsamp_fluxerr = spectres(new_spec_wavs=cassis_wave, old_spec_wavs=wave, spec_fluxes=flux, spec_errs=fluxerr)
plt.plot(wave, flux, label='raw');
plt.plot(downsamp_wave, downsamp_flux, label='spectres downsamp')
# plt.plot(wave, smooth(flux, window_len=100), label='hanning');
# plt.plot(wave, smooth(flux, window_len=100), label='flat');
# plt.plot(wave, smooth(flux, window_len=100), label='hamming');
# plt.plot(wave, smooth(flux, window_len=100), label='bartlett');
# plt.plot(wave, smooth(flux, window_len=100), label='blackman');
# plt.xlim(xmin=10.45, xmax=10.65)
# plt.ylim(ymax=1500)
plt.legend(loc=0)
np.all(downsamp_wave == cassis_wave)
###Output
_____no_output_____ |
docs/jupyter/geometry/pointcloud_outlier_removal.ipynb | ###Markdown
Point cloud outlier removalWhen collecting data from scanning devices, the resulting point cloud tends to contain noise and artifacts that one would like to remove. This tutorial addresses the outlier removal features of Open3D. Prepare input dataA point cloud is loaded and downsampled using `voxel_downsample`.
###Code
print("Load a ply point cloud, print it, and render it")
sample_pcd_data = o3d.data.SamplePointCloudPCD()
pcd = o3d.io.read_point_cloud(sample_pcd_data.path)
o3d.visualization.draw_geometries([pcd],
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
print("Downsample the point cloud with a voxel of 0.02")
voxel_down_pcd = pcd.voxel_down_sample(voxel_size=0.02)
o3d.visualization.draw_geometries([voxel_down_pcd],
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
###Output
_____no_output_____
###Markdown
Alternatively, use `uniform_down_sample` to downsample the point cloud by collecting every n-th point.
###Code
print("Every 5th points are selected")
uni_down_pcd = pcd.uniform_down_sample(every_k_points=5)
o3d.visualization.draw_geometries([uni_down_pcd],
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
###Output
_____no_output_____
###Markdown
Select down sampleThe following helper function uses `select_by_index`, which takes a binary mask to output only the selected points. The selected points and the non-selected points are visualized.
###Code
def display_inlier_outlier(cloud, ind):
inlier_cloud = cloud.select_by_index(ind)
outlier_cloud = cloud.select_by_index(ind, invert=True)
print("Showing outliers (red) and inliers (gray): ")
outlier_cloud.paint_uniform_color([1, 0, 0])
inlier_cloud.paint_uniform_color([0.8, 0.8, 0.8])
o3d.visualization.draw_geometries([inlier_cloud, outlier_cloud],
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
###Output
_____no_output_____
###Markdown
Statistical outlier removal`statistical_outlier_removal` removes points that are further away from their neighbors compared to the average for the point cloud. It takes two input parameters:- `nb_neighbors`, which specifies how many neighbors are taken into account in order to calculate the average distance for a given point.- `std_ratio`, which allows setting the threshold level based on the standard deviation of the average distances across the point cloud. The lower this number the more aggressive the filter will be.
###Code
print("Statistical oulier removal")
cl, ind = voxel_down_pcd.remove_statistical_outlier(nb_neighbors=20,
std_ratio=2.0)
display_inlier_outlier(voxel_down_pcd, ind)
###Output
_____no_output_____
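###Markdown
As a quick optional experiment (not part of the original tutorial), a smaller `std_ratio` makes the filter more aggressive and flags more points as outliers:
###Code
# Compare the number of inliers kept with a stricter std_ratio
cl_strict, ind_strict = voxel_down_pcd.remove_statistical_outlier(nb_neighbors=20,
                                                                  std_ratio=1.0)
print("Inliers kept with std_ratio=2.0:", len(ind))
print("Inliers kept with std_ratio=1.0:", len(ind_strict))
###Output
_____no_output_____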
###Markdown
Radius outlier removal`radius_outlier_removal` removes points that have few neighbors in a given sphere around them. Two parameters can be used to tune the filter to your data:- `nb_points`, which lets you pick the minimum amount of points that the sphere should contain.- `radius`, which defines the radius of the sphere that will be used for counting the neighbors.
###Code
print("Radius oulier removal")
cl, ind = voxel_down_pcd.remove_radius_outlier(nb_points=16, radius=0.05)
display_inlier_outlier(voxel_down_pcd, ind)
###Output
_____no_output_____
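###Markdown
After identifying outliers you would typically carry on with only the inlier points. Below is a minimal sketch (not part of the original tutorial) using the index list returned by the radius-based filter above:
###Code
# Keep only the inliers flagged by remove_radius_outlier and visualize them
inlier_cloud = voxel_down_pcd.select_by_index(ind)
print(inlier_cloud)
o3d.visualization.draw_geometries([inlier_cloud],
                                  zoom=0.3412,
                                  front=[0.4257, -0.2125, -0.8795],
                                  lookat=[2.6172, 2.0475, 1.532],
                                  up=[-0.0694, -0.9768, 0.2024])
###Output
_____no_output_____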
###Markdown
Point cloud outlier removalWhen collecting data from scanning devices, the resulting point cloud tends to contain noise and artifacts that one would like to remove. This tutorial addresses the outlier removal features of Open3D. Prepare input dataA point cloud is loaded and downsampled using `voxel_downsample`.
###Code
print("Load a ply point cloud, print it, and render it")
sample_pcd_data = o3d.data.PCDPointCloud()
pcd = o3d.io.read_point_cloud(sample_pcd_data.path)
o3d.visualization.draw_geometries([pcd],
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
print("Downsample the point cloud with a voxel of 0.02")
voxel_down_pcd = pcd.voxel_down_sample(voxel_size=0.02)
o3d.visualization.draw_geometries([voxel_down_pcd],
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
###Output
_____no_output_____
###Markdown
Alternatively, use `uniform_down_sample` to downsample the point cloud by collecting every n-th point.
###Code
print("Every 5th points are selected")
uni_down_pcd = pcd.uniform_down_sample(every_k_points=5)
o3d.visualization.draw_geometries([uni_down_pcd],
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
###Output
_____no_output_____
###Markdown
Select down sampleThe following helper function uses `select_by_index`, which takes a binary mask to output only the selected points. The selected points and the non-selected points are visualized.
###Code
def display_inlier_outlier(cloud, ind):
inlier_cloud = cloud.select_by_index(ind)
outlier_cloud = cloud.select_by_index(ind, invert=True)
print("Showing outliers (red) and inliers (gray): ")
outlier_cloud.paint_uniform_color([1, 0, 0])
inlier_cloud.paint_uniform_color([0.8, 0.8, 0.8])
o3d.visualization.draw_geometries([inlier_cloud, outlier_cloud],
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
###Output
_____no_output_____
###Markdown
Statistical outlier removal`statistical_outlier_removal` removes points that are further away from their neighbors compared to the average for the point cloud. It takes two input parameters:- `nb_neighbors`, which specifies how many neighbors are taken into account in order to calculate the average distance for a given point.- `std_ratio`, which allows setting the threshold level based on the standard deviation of the average distances across the point cloud. The lower this number the more aggressive the filter will be.
###Code
print("Statistical oulier removal")
cl, ind = voxel_down_pcd.remove_statistical_outlier(nb_neighbors=20,
std_ratio=2.0)
display_inlier_outlier(voxel_down_pcd, ind)
###Output
_____no_output_____
###Markdown
Radius outlier removal`radius_outlier_removal` removes points that have few neighbors in a given sphere around them. Two parameters can be used to tune the filter to your data:- `nb_points`, which lets you pick the minimum amount of points that the sphere should contain.- `radius`, which defines the radius of the sphere that will be used for counting the neighbors.
###Code
print("Radius oulier removal")
cl, ind = voxel_down_pcd.remove_radius_outlier(nb_points=16, radius=0.05)
display_inlier_outlier(voxel_down_pcd, ind)
###Output
_____no_output_____ |
RandomForests/RandomForests.ipynb | ###Markdown
Tutorial 3 Random Forests Overview. This tutorial is based on work done by Chetan Deva on using random forests to predict leaf temperature from a number of measurable features. Plants regulate their temperature in extreme environments: for example, a plant in a desert can stay 18 °C cooler than the air, while a plant in the mountains can stay 22 °C warmer than the air. Leaf temperature therefore differs from air temperature, and plant growth and development are strongly dependent on leaf temperature. Most Land Surface Models (LSMs) and Crop Growth Models (CGMs) use air temperature as an approximation of leaf temperature. However, during time periods when large differences exist, this can be an important source of input data uncertainty. In this tutorial, leaf data containing a number of features is fed into a random forest regression model to evaluate which features are the most important for accurately predicting the leaf temperature differential. The background scientific paper for the science behind leaf temperature, on which this tutorial is based, is [Still et al 2019](https://esajournals.onlinelibrary.wiley.com/doi/pdf/10.1002/ecs2.2768) Random Forests Random forests are ensembles of decision trees in which each tree produces a prediction and the average is taken. In this tutorial we will first build a simple decision tree, and then build on that, using Python libraries to easily create Random Forest models and investigate which features are important in determining leaf temperature. Recommended reading * [Random Forest overview linked with python](https://towardsdatascience.com/an-implementation-and-explanation-of-the-random-forest-in-python-77bf308a9b76)* [Random Forests Computer Science overview paper](https://link.springer.com/article/10.1023/A:1010933404324) The very basics If you know nothing about machine learning and are finding the above links rather dry you might find the following youtube videos useful: Decision Trees *the cells below use ipython magics to embed youtube videos*
###Code
%%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/kakLu2is3ds" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
###Output
_____no_output_____
###Markdown
Random Forests
###Code
%%HTML
<iframe width="560" height="315" src="https://www.youtube.com/embed/v6VJ2RO66Ag" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
###Output
_____no_output_____
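###Markdown
As a concrete illustration of the averaging idea described above, the sketch below (synthetic data, not part of the original tutorial) fits many depth-limited decision trees on bootstrap samples and averages their predictions, which is essentially what a random forest does.
###Code
# Illustrative sketch only: bagging decision trees by hand on synthetic data
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X_demo = rng.uniform(0, 10, size=(200, 1))
y_demo = np.sin(X_demo[:, 0]) + rng.normal(scale=0.3, size=200)

trees = []
for _ in range(50):
    idx = rng.randint(0, len(X_demo), len(X_demo))  # bootstrap sample (with replacement)
    trees.append(DecisionTreeRegressor(max_depth=3).fit(X_demo[idx], y_demo[idx]))

# the ensemble prediction is the mean of the individual tree predictions
X_new = np.array([[2.5]])
tree_preds = np.array([t.predict(X_new)[0] for t in trees])
print("spread of individual trees:", tree_preds.std())
print("ensemble (random forest style) prediction:", tree_preds.mean())
###Output
_____no_output_____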
###Markdown
Python Basic python knowledge is assumed for this tutorial, which will use [SciKit-Learn](https://scikit-learn.org/stable/), a library that covers a wide variety of useful machine learning tools. All the following code should run quickly on a standard laptop. Requirements These notebooks should run with the following requirements satisfied. Python Packages: * Python 3* scikit-learn* notebook* numpy* seaborn* matplotlib* pandas* statistics Data Requirements This notebook refers to some data included in the GitHub repository **Contents:**1. [Leaf Data](Leaf-Data)2. [Decision Trees](Decision-Trees)3. [Random Forests](Random-Forests)4. [Hyper Parameters](HyperParameters)5. [Using Automated Hyperparameter Selection](Using-Automated-Hyperparamter-Selection) Load in all required modules (including some auxiliary code) and turn off warnings.
###Code
# For readability: disable warnings
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import statistics
import itertools
from subprocess import call
from IPython.display import Image
# plotting libraries
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# sklearn libraries
from sklearn.tree import export_graphviz
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split, RepeatedKFold, cross_val_score
from sklearn.metrics import precision_score, recall_score, roc_auc_score, roc_curve
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import confusion_matrix
###Output
_____no_output_____
###Markdown
Leaf Data 805 observations of bean leaves were taken over 3 separate growing seasons with adequate water and nitrogen (i.e. good conditions). The leaf energy balance depends on incoming shortwave radiation ($Sw_{in}$), net longwave radiation ($Lw_{in}$ vs. $Lw_{out}$) and the cooling effect of leaf transpiration ($LE$). This motivates the features used (shown in the table below) | Feature | Description || :--------- | :---------- || Air temperature | 2 cm above the leaf || Leaf temperature | Using a contactless infrared camera || Relative Humidity | Relative Humidity next to the leaf || Photosynthetically active radiation | The part of the spectrum the plant uses || Photosynthetic Efficiency | % of incoming light going to photochemistry || Proton conductivity | Steady state rate of proton flux across membrane || Relative Chlorophyll | Greenness of the leaf || Leaf Thickness | The thickness of the leaf || Leaf Angle | The leaf angle |
###Code
#%% import the data into a pandas dataframe
df = pd.read_csv('data/df_prepped.csv')
# Print some information about the data
print('There are '+ str(len(df)) + ' data entries\n')
print('Min Ambient Temperature is ' + str(df['Ambient Temperature'].values.min())+'\n')
print('Max Ambient Temperature is ' + str(df['Ambient Temperature'].values.max())+'\n')
print('Std Ambient Temperature is ' + str(df['Ambient Temperature'].values.std())+'\n')
print('Min Ambient Humidity is ' + str(df['Ambient Humidity'].values.min())+'\n')
print('Max Ambient Humidity is ' + str(df['Ambient Humidity'].values.max())+'\n')
print('Std Ambient Humidity is ' + str(df['Ambient Humidity'].values.std())+'\n')
print('Min Leaf Temperature Differential is ' + str(df['Leaf Temperature Differential'].values.min())+'\n')
print('Max Leaf Temperature Differential is ' + str(df['Leaf Temperature Differential'].values.max())+'\n')
print('Mean Leaf Temperature Differential is ' + str(df['Leaf Temperature Differential'].values.mean())+'\n')
###Output
_____no_output_____
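###Markdown
As an optional quick look (not part of the original tutorial), we can eyeball the pairwise correlations between the numeric leaf measurements before modelling:
###Code
# Optional: correlation heatmap of the numeric columns in the leaf dataset
plt.figure(figsize=(10, 8))
sns.heatmap(df.select_dtypes('number').corr(), cmap='coolwarm', center=0)
plt.title('Correlation between leaf measurements')
plt.show()
###Output
_____no_output_____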
###Markdown
Summary of weather conditions in the data. The data covers a fairly narrow range of temperature and relative humidity, but these ranges are typical for this crop breeding station. Leaf Temperature Differential The mean difference between leaf and air temperature is about -4 °C, meaning the leaves are often cooler than the air.
###Code
#%% calculate required variables
# leaf temperature
df['ltemp'] = df['Ambient Temperature'] + df['Leaf Temperature Differential']
#%% define the target variable and the predictors
# define the variable for prediction
y = df['ltemp']
# define the dataframe of predictor variables
X = df.drop(['ltemp', 'Leaf Temperature Differential'], axis = 1)
# create list of random numbers
rnumbers = list(np.random.randint(1,1000, size=X.shape[0]))
# create dataframe for feature selection purposes only including a column of random numbers
X_features = X
X_features['random_numbers'] = rnumbers
###Output
_____no_output_____
###Markdown
Decision Tree Simple Example. First, we create the features `X_1f` from a subset of our data and the labels `yy`. There are only two features, which will allow us to visualize the data and which makes this a very easy problem.
###Code
# Set random seed to ensure "reproducible" runs
RSEED = 50
# X_2f
X_1f = np.zeros((805,2))
X_1f[:,0] = np.array(X['Ambient Temperature'])
X_1f[:,1] = np.array(X['Ambient Humidity'])
N=800
X_1f=X_1f[0:N,:]
# label classes for simplicty split into two classes low or high
yy=pd.DataFrame()
yy['vals']=y[0:N]
yy['label']=pd.qcut(yy.vals.values, 2, labels=[0, 1])
yy['strlabel']=''
yy.strlabel[yy.label==1]='high'
yy.strlabel[yy.label==0]='low'
###Output
_____no_output_____
###Markdown
Data Visualization. To get a sense of the data, we can graph some of the data points with the text showing the label.
###Code
# Plot data?
plt.style.use('fivethirtyeight')
plt.rcParams['font.size'] = 18
plt.figure(figsize = (12, 12))
# Plot a subset each point as the label
for x1, x2, label in zip(X_1f[0:20, 0], X_1f[0:20, 1], yy.strlabel[0:20].values):
plt.text(x1, x2, label, fontsize = 34, color = 'g',
ha='center', va='center')
# Plot formatting
plt.grid(None);
plt.xlim((30, 34));
plt.ylim((47, 64));
plt.xlabel('air temp', size = 20); plt.ylabel('humidity', size = 20); plt.title('Leaf Temp', size = 24)
###Output
_____no_output_____
###Markdown
This shows a simple linear classifier will not be able to draw a boundary that separates the classes. The single decision tree will be able to completely separate the points because it essentially draws many repeated linear boundaries between points. A decision tree is a non-parametric model because the number of parameters grows with the size of the data. Decision Trees Here we quickly build and train a single decision tree on the data using Scikit-Learn `sklearn.DecisionTreeClassifier`. The tree will learn how to separate the points, building a flowchart of questions based on the feature values and the labels. At each stage, the decision tree splits by maximizing the reduction in Gini impurity. We'll use the default hyperparameters for the decision tree which means it can grow as deep as necessary in order to completely separate the classes. This will lead to overfitting because the model memorizes the training data, and in practice, we usually want to limit the depth of the tree so it can generalize to testing data.
###Code
# Make a decision tree and train
tree = DecisionTreeClassifier(random_state=RSEED)
###Output
_____no_output_____
###Markdown
Altering max depth. Once you have run through the next few cells you will see a link back to this cell, where you can investigate the impact of altering the tree depth by uncommenting the cell below.
###Code
# After runnning the follow cells try seeing the effect of limiting the max depth
# uncomment below code - you can play around with altering max depth to see the limits
# tree = DecisionTreeClassifier(max_depth=4, random_state=RSEED)
tree.fit(X_1f, yy.label.values)
print(f'Decision tree has {tree.tree_.node_count} nodes with maximum depth {tree.tree_.max_depth}.')
print(f'Model Accuracy: {tree.score(X_1f, yy.label.values )}')
###Output
_____no_output_____
###Markdown
Without limiting the depth of the tree, the model will have achieved 100% accuracy.
###Code
# 30% examples in test data
train, test, train_labels, test_labels = train_test_split(X_1f,yy.label.values ,
stratify = yy.label.values,
test_size = 0.3,
random_state = RSEED)
# Features for feature importances
features = ['Ambient Temperature','Ambient Humidity']
###Output
_____no_output_____
###Markdown
Visualizing the decision tree. To get a sense of how the decision tree "thinks", it's helpful to visualize the entire structure. This will show each node in the tree, which we can use to make new predictions. Because the tree is relatively small, we can understand the entire image. We can use `graphviz` to visualise it, limiting the depth so the tree isn't too large to display. **Note: the graphviz dependency doesn't always work out of the box, so an example image will be loaded if this fails.** `export_graphviz` takes the `max_depth=n` parameter; you may wish to remove this in the commented-out command below to see how large the tree now looks (it may take a bit longer to produce!)
###Code
# Save tree as dot file
export_graphviz(tree, 'tree_example.dot', rounded = True,
feature_names = features, max_depth=6,
class_names = ['high leaf temp', 'low leaf temp'], filled = True)
# Unlimitted graphviz
# export_graphviz(tree, 'tree_example.dot', rounded = True, feature_names = features, class_names = ['high leaf temp', 'low leaf temp'], filled = True)
# Convert to png
try:
call(['dot', '-Tpng', 'tree_example.dot', '-o', 'tree_example.png', '-Gdpi=200'])
except:
print('graphviz failed')
print('Loaded the max depth limited figure pre-produced in case of graphviz failure \n')
print('If you would like to see the results having limited the max depth please uncomment the last line in this cell')
Image(filename='tree_example.png')
#Image(filename='tree_example_max_depth_4.png')
###Output
_____no_output_____
###Markdown
At each node the decision tree asks a feature-based question chosen to reduce the Gini impurity. Gini Impurity: the probability that a randomly selected sample from a node will be incorrectly classified according to the distribution of samples in that node. At each split the tree tries to pick the value that most reduces the Gini impurity; if the maximum depth is not limited, the impurity reaches 0 for every training point (the full tree is not shown here)
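As a concrete illustration of that definition, the short helper below (not part of the original tutorial) computes the Gini impurity of a node directly from its class counts.
###Code
# Illustrative helper (not from the original tutorial):
# Gini impurity G = 1 - sum_k p_k^2, where p_k is the proportion of class k in the node.
def gini_impurity(class_counts):
    total = sum(class_counts)
    return 1 - sum((count / total) ** 2 for count in class_counts)

print(gini_impurity([50, 0]))   # pure node -> 0.0
print(gini_impurity([25, 25]))  # evenly mixed two-class node -> 0.5
print(gini_impurity([40, 10]))  # -> 0.32
###Output
_____no_output_____
###Markdown
Next the tree is retrained on the training split and evaluated on both the training and test sets.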
###Code
# Train tree
tree.fit(train, train_labels)
print(f'Decision tree has {tree.tree_.node_count} nodes with maximum depth {tree.tree_.max_depth}.')
# Will over fit
# Make probability predictions
train_probs = tree.predict_proba(train)[:, 1]
probs = tree.predict_proba(test)[:, 1]
train_predictions = tree.predict(train)
predictions = tree.predict(test)
print(f'Train ROC AUC Score: {roc_auc_score(train_labels, train_probs)}')
print(f'Test ROC AUC Score: {roc_auc_score(test_labels, probs)}')
###Output
_____no_output_____
###Markdown
Evaluating the model[Receiver Operating Characteristic (ROC) curves](https://medium.com/cascade-bio-blog/making-sense-of-real-world-data-roc-curves-and-when-to-use-them-90a17e6d1db) describe the trade-off between the true positive rate (TPR) and the false positive rate (FPR) across different probability thresholds for a classifier.
###Code
def evaluate_model(predictions, probs, train_predictions, train_probs):
"""Compare machine learning model to baseline performance.
Computes statistics and shows ROC curve."""
baseline = {}
baseline['recall'] = recall_score(test_labels, [1 for _ in range(len(test_labels))])
baseline['precision'] = precision_score(test_labels, [1 for _ in range(len(test_labels))])
baseline['roc'] = 0.5
results = {}
results['recall'] = recall_score(test_labels, predictions)
results['precision'] = precision_score(test_labels, predictions)
results['roc'] = roc_auc_score(test_labels, probs)
train_results = {}
train_results['recall'] = recall_score(train_labels, train_predictions)
train_results['precision'] = precision_score(train_labels, train_predictions)
train_results['roc'] = roc_auc_score(train_labels, train_probs)
for metric in ['recall', 'precision', 'roc']:
print(f'{metric.capitalize()} Baseline: {round(baseline[metric], 2)} Test: {round(results[metric], 2)} Train: {round(train_results[metric], 2)}')
# Calculate false positive rates and true positive rates
base_fpr, base_tpr, _ = roc_curve(test_labels, [1 for _ in range(len(test_labels))])
model_fpr, model_tpr, _ = roc_curve(test_labels, probs)
plt.figure(figsize = (8, 6))
plt.rcParams['font.size'] = 16
# Plot both curves
plt.plot(base_fpr, base_tpr, 'b', label = 'baseline')
plt.plot(model_fpr, model_tpr, 'r', label = 'model')
plt.legend();
plt.xlabel('False Positive Rate'); plt.ylabel('True Positive Rate'); plt.title('ROC Curves');
evaluate_model(predictions, probs, train_predictions, train_probs)
###Output
_____no_output_____
###Markdown
If the model curve lies above the blue baseline (towards the top-left corner), the model is performing better than random chance. Feature ImportancesWe can extract the features considered most important by the decision tree. The values are computed by summing the reduction in Gini impurity over all of the nodes of the tree in which the feature is used; below we show the relative importance assigned to Ambient Temperature vs Ambient Humidity
###Code
fi = pd.DataFrame({'feature': features,
'importance': tree.feature_importances_}).\
sort_values('importance', ascending = False)
fi.head()
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Oranges):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
Source: http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.figure(figsize = (10, 10))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title, size = 24)
plt.colorbar(aspect=4)
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45, size = 14)
plt.yticks(tick_marks, classes, size = 14)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
# Labeling the plot
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt), fontsize = 20,
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.grid(None)
plt.tight_layout()
plt.ylabel('True label', size = 18)
plt.xlabel('Predicted label', size = 18)
###Output
_____no_output_____
###Markdown
The function `plot_confusion_matrix` has the ability to normalise the values which may aid in interpretation - you can see this by uncommenting the last line in the cell below
###Code
cm = confusion_matrix(test_labels, predictions)
plot_confusion_matrix(cm, classes = ['High Leaf Temp', 'Low Leaf Temp'],
title = 'Leaf Temp Confusion Matrix')
# plot_confusion_matrix(cm, normalize=True, classes = ['High Leaf Temp', 'Low Leaf Temp'], title = 'Leaf Temp Confusion Matrix')
###Output
_____no_output_____
###Markdown
Limit Maximum DepthIn practice, we usually want to limit the maximum depth of the decision tree (even in a random forest) so the tree can generalize better to testing data. Although this will lead to reduced accuracy on the training data, it can improve performance on the testing data. **Try going back to [Altering Max Depth](Altering-max-depth) and uncommenting the line:** `tree = DecisionTreeClassifier(max_depth=4, random_state=RSEED)` **and re-running the rest of the following cells until this one** The model no longer gets perfect accuracy on the training data. However, it probably would do better on the testing data since we have limited the maximum depth to prevent overfitting. This is an example of the bias-variance tradeoff in machine learning. A model with high variance has learned the training data very well but often cannot generalize to new points in the test set. On the other hand, a model with high bias has not learned the training data very well because it does not have enough complexity. This model will also not perform well on new points. Limiting the depth of a single decision tree is one way we can try to reduce the variance (and hence the overfitting) of the model. Another option is to use an entire forest of trees, training each one on a random subsample of the training data. The final model then takes an average of all the individual decision trees to arrive at a classification. This is the idea behind the random forest. Random Forests An ensemble (anywhere from hundreds to hundreds of thousands) of decision trees, each trained on a random subsample of the observations; at each node only a subset of features is considered for splitting, and the predictions are averaged to arrive at the final classification. We're going to use the [scikit-learn RandomForestRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) to set up our model: `RandomForestRegressor(max_features, random_state=SEED, n_estimators, max_depth)`. This is a slight tweak to allow us to look at continuous values rather than discrete categories, as outlined in [this article](https://medium.com/swlh/random-forest-and-its-implementation-71824ced454f) if you wish to understand the difference. Random Forest Hyperparameters In our random forest model defined in the Python code below, `RandomForestRegressor(max_features = 3, random_state=SEED, n_estimators = 100, max_depth = md)`: * **Max features `max_features`:** The number of features to consider when looking for the best split. This could be set to N/3 as a quick heuristic for regression (rounding up).* **Max samples:** The proportion of the data set used for bootstrapping. The default is to use the whole data set, which is usually a sensible choice, so it is not passed into our function.* **Number of trees `n_estimators`:** The number of trees grown in the forest. In general, more trees will result in better performance, which eventually plateaus. Over-fitting is not a danger here.* **Max depth `max_depth`:** The depth of a decision tree. The default is to keep trees unpruned. **NB** `random_state=SEED` is set to make runs reproducible. [K-fold cross validation](https://towardsdatascience.com/k-fold-cross-validation-explained-in-plain-english-659e33c0bc0)1. Split the data into k folds 2. Reserve 1 fold for testing and use the remaining k-1 folds for training 3. Repeat the procedure over all k folds 4. Average the performance evaluation statistics across folds. 
In this piece of work I use 10 folds and repeat the whole process 3 times: `RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)` (a short sketch of what this object produces is shown below). Evaluating the Random Forest * [R-squared](https://en.wikipedia.org/wiki/Coefficient_of_determination): proportion of variance explained by the model * [RMSE](https://en.wikipedia.org/wiki/Mean_squared_error): standard deviation of the prediction errors (residuals) The sensitivity steps might take a few minutes to run depending on computer performance
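As a small illustration (a sketch on a toy array, not on our data), the cross-validation object above simply re-shuffles the data for each repeat and hands out 10 train/test splits per repeat, so every configuration is scored 30 times:
###Code
# Illustrative sketch of RepeatedKFold on a toy array (not the project data)
import numpy as np
from sklearn.model_selection import RepeatedKFold

toy = np.arange(20)
cv_demo = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
splits = list(cv_demo.split(toy))
print(len(splits), 'train/test splits in total')   # 10 folds x 3 repeats = 30
print('first held-out fold:', splits[0][1])        # 2 of the 20 toy samples
###Output
_____no_output_____
###Markdown
Now the sensitivity analysis itself, looping over a range of maximum depths: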
###Code
# conduct sensitivity test on max depth
# seed
SEED = 1
# range of tree depths
mds = np.arange(1,30)
# list for evaluation
r2_sens_lst_md = []
rmse_sens_lst_md = []
# loop over all values for maximum depth between 1 and 30, storing the r2 and rmse for 10 fold cross validation
for i in range(len(mds)):
md = mds[i]
rf_sens = RandomForestRegressor(max_features = 3, random_state=SEED, n_estimators = 100, max_depth = md)
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
r2s_sens = list(cross_val_score(rf_sens, X, y, scoring='r2', cv=cv, n_jobs=-1, error_score='raise'))
rmses_sens = list(cross_val_score(rf_sens, X, y, scoring='neg_root_mean_squared_error', cv=cv, n_jobs=-1, error_score='raise'))
r2_sens = statistics.mean(r2s_sens)
rmse_sens = statistics.mean(rmses_sens)
r2_sens_lst_md.append(r2_sens)
rmse_sens_lst_md.append(rmse_sens)
# max depth dataframe
df_sens_md = pd.DataFrame({'max_depth' : list(mds), 'r2' : r2_sens_lst_md, 'rmse' : rmse_sens_lst_md})
df_sens_md['rmse'] = df_sens_md['rmse']*-1
# scatterplot for maximum depth vs.performance metric (change 'r2' for 'rmse' to replicate the plots in the presentation)
sns.scatterplot(x = 'max_depth', y = 'r2', data = df_sens_md)
plt.title('max depth vs. r2')
###Output
_____no_output_____
###Markdown
Sensitivity Testing for Maximum Depth Past a maximum tree depth of approx. 25, there is no change to either the variance explained or the bias. For this reason, it is useful to prune the depth of the random forest to this value to save computation time.
###Code
#%% conduct sensitivity test on number of parameters to split
# range of parameters
mxf = np.arange(1,9)
# list for evaluation
r2_sens_lst_mxf = []
rmse_sens_lst_mxf = []
# loop over all values for maximum number of features used in splitting between 1 and 9, storing the r2 and rmse for 10 fold cross validation
for i in range(len(mxf)):
mx = mxf[i]
rf_sens = RandomForestRegressor(max_features = mx, random_state=SEED, n_estimators = 100)
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
r2s_sens = list(cross_val_score(rf_sens, X, y, scoring='r2', cv=cv, n_jobs=-1, error_score='raise'))
rmses_sens = list(cross_val_score(rf_sens, X, y, scoring='neg_root_mean_squared_error', cv=cv, n_jobs=-1, error_score='raise'))
r2_sens = statistics.mean(r2s_sens)
rmse_sens = statistics.mean(rmses_sens)
r2_sens_lst_mxf.append(r2_sens)
rmse_sens_lst_mxf.append(rmse_sens)
# max depth dataframe
df_sens_mxf = pd.DataFrame({'max_features' : list(mxf), 'r2' : r2_sens_lst_mxf, 'rmse' : rmse_sens_lst_mxf})
df_sens_mxf['rmse'] = df_sens_mxf['rmse']*-1
# scatterplot for maximum number of features vs.performance metric (change 'r2' for 'rmse' to replicate the plots in the presentation)
sns.scatterplot(x = 'max_features', y = 'r2', data = df_sens_mxf)
plt.title('max features vs. r2')
###Output
_____no_output_____
###Markdown
Sensitivity Testing for Maximum Features in SplitsPast a maximum of 5 features, there is very little increase in variance explained or improvement in model bias. Therefore `max_features = 5` is selected.
###Code
#%% Evaluate the performance of the Random Forest Algorithm with chosen parameters from the sensitivity analysis using cross validation
# instantiate the random forest regressor
rf = RandomForestRegressor(max_features = 5, random_state=SEED, n_estimators = 100, max_depth = 25)
# set up the cross validation
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
###Output
_____no_output_____
###Markdown
**a note on the `scoring='neg_root_mean_squared_error'` parameter**The keen-eyed will spot something odd about the idea of a negative root mean squared error! The negative sign is there because `cross_val_score()` assumes that a larger score is always better, whereas RMSE is better when it is smaller. Using `'neg_root_mean_squared_error'` inverts the sign so that the convention still holds. This also results in the reported score being negative, even though RMSE itself can never be negative. The actual RMSE is simply -1 x NegRMSE
###Code
# calculate the metrics for each kfold cross validation
r2s = list(cross_val_score(rf, X, y, scoring='r2', cv=cv, n_jobs=-1, error_score='raise'))
rmses = list(cross_val_score(rf, X, y, scoring='neg_root_mean_squared_error', cv=cv, n_jobs=-1, error_score='raise'))
# take the mean of these
r2 = statistics.mean(r2s)
rmse = statistics.mean(rmses)
print('mean r2 = '+ str(r2))
print('mean rmse = '+ str(-1*rmse))
###Output
_____no_output_____
###Markdown
Predictive ability The model explains a useful share of the variance (mean r2 = 0.77) with a reasonably low bias (mean RMSE = 1.50). This suggests that expanded versions of this model may be a useful sub-module in land surface / crop growth models. Feature importanceClearly Ambient Temperature and Ambient Humidity are the most important features for prediction. However, features like photosynthetic efficiency, membrane conductance and relative chlorophyll also contributed. Their importance was a lot greater than that of a column of random numbers. Many of these quantities are available from satellites, so this kind of model may be applicable across spatial scales.
###Code
#%% calculate the feature importances
# fit the random forest regressor with the dataframe of predictor variables that includes the column of random numbers for comparison
rf.fit(X_features, y)
# feature importances
importance = rf.feature_importances_
# features
features = list(X_features.columns)
# dataframe for plotting
df_features = pd.DataFrame(list(zip(importance, features)),
columns =['importance', 'features'])
#%% plot the feature importances
sns.barplot(x = 'importance', y = 'features', data = df_features)
#%% compose the statistics for each cross validation effort in a dataframe for plotting.
df_eval = pd.DataFrame({'r2' : r2s, 'rmse' : rmses})
df_eval['rmse'] = df_eval['rmse']*-1
#%% plot the statistics
# replace r2 with rmse to reproduce the plots in the presentation
sns.boxplot(x ='r2', data = df_eval)
plt.title('10 fold cross validation repeated 3 times')
###Output
_____no_output_____
###Markdown
Using Automated Hyperparameter SelectionWe can use `RandomizedSearchCV` to sample a parameter grid and search for the hyperparameters that create the best performing model. * increasing `n_iter` may improve performance but will be slower to run* `cv = 10` is a 10-fold cross validation* `scoring = 'r2'` scores based on r-squared. More information on this can be found [here](https://scikit-learn.org/stable/modules/grid_search.html)
###Code
# Estimator for use in random search
estimator = RandomForestRegressor( random_state=SEED)
# Hyperparameter grid
param_grid = {
'n_estimators': np.linspace(10, 200).astype(int),
'max_depth': [None] + list(np.linspace(3, 20).astype(int)),
'max_features': ['auto', 'sqrt', None] + list(np.arange(0.5, 1, 0.1)),
'max_leaf_nodes': [None] + list(np.linspace(10, 50, 500).astype(int)),
'min_samples_split': [2, 5, 10],
'bootstrap': [True, False]
}
# Create the random search model
rs = RandomizedSearchCV(estimator, param_grid, n_jobs = -1,
scoring = 'r2', cv = 10,
n_iter = 50, verbose = 1, random_state=RSEED)
# Fit
rs.fit(train, train_labels)
# Took several mins
# Now we can access the best hyper parameteres
rs.best_params_
###Output
_____no_output_____
###Markdown
This produces slightly different hyperparameters than our sensitivity testing did
###Code
best_model = rs.best_estimator_
# calculate the metrics for each kfold cross validation
r2s = list(cross_val_score(best_model, X, y, scoring='r2', cv=cv, n_jobs=-1, error_score='raise'))
rmses = list(cross_val_score(best_model, X, y, scoring='neg_root_mean_squared_error', cv=cv, n_jobs=-1, error_score='raise'))
# take the mean of these
r2 = statistics.mean(r2s)
rmse = statistics.mean(rmses)
print('mean r2 = '+ str(r2))
print('mean rmse = '+ str(-1*rmse))  # flip the sign back from the neg_ scorer, as above
#%% calculate the feature importances
# fit the random forest regressor with the dataframe of predictor variables that includes the column of random numbers for comparison
best_model.fit(X_features, y)
# feature importances
importance = best_model.feature_importances_
# features
features = list(X_features.columns)
# dataframe for plotting
df_features = pd.DataFrame(list(zip(importance, features)),
columns =['importance', 'features'])
#%% plot the feature importances
sns.barplot(x = 'importance', y = 'features', data = df_features)
###Output
_____no_output_____
###Markdown
The best model selected here uses fewer features, but the broad pattern remains the same
###Code
#%% compose the statistics for each cross validation effort in a dataframe for plotting.
df_eval = pd.DataFrame({'r2' : r2s, 'rmse' : rmses})
df_eval['rmse'] = df_eval['rmse']*-1
#%% plot the statistics
# replace r2 with rmse to reproduce the plots in the presentation
sns.boxplot(x ='r2', data = df_eval)
plt.title('10 fold cross validation repeated 3 times')
###Output
_____no_output_____ |
ode-modeling1-instructor.ipynb | ###Markdown
Modeling Gene Networks Using Ordinary Differential EquationsAuthor: Paul M. Magwene Date: February 2016- - - To gain some intuition for how systems biologists build mathematical models of gene networks, we're going to use computer simulations to explore the dynamical behavior of simple transcriptional networks. In each of our simulations we will keep track of the concentration of the different genes of interest as they change over time. The basic approach we will use to calculate changes in the quantity of different molecules is differential equations, which are simply a way of describing the instantaneous change in a quantity of interest. All of our differential equations will be of this form:\begin{eqnarray*}\frac{dY}{dt} = \mbox{rate of production} - \mbox{rate of decay}\end{eqnarray*}To state this in words -- how the amount of gene $Y$ changes over time is a function of two things: 1) a growth term which represents the rate at which the gene is being transcribed and translated; and 2) a decay term which gives the rate at which $Y$ transcripts and protein are being degraded. In general we will assume that the "rate of production" is a function of the concentration of the genes that regulate $Y$ (i.e. its inputs in the transcriptional network), while the "rate of decay" is proportional to the amount of $Y$ that is present. So the above formula will take the following structure:$$\frac{dY}{dt} = f(X_1, X_2, \ldots) - \alpha Y$$The $f(X_1, X_2, \ldots)$ term represents the growth term and is a function of the transcription factors that regulate $Y$. The term $\alpha Y$ represents the rate at which $Y$ is being broken down or diluted. Notice that the decay rate is proportional to the amount of $Y$ that is present. If $\frac{dY}{dt}$ is positive then the concentration of gene $Y$ is increasing, if $\frac{dY}{dt}$ is negative the concentration of $Y$ is decreasing, and if $\frac{dY}{dt} = 0$ then $Y$ is at steady state. Modeling the rate of production with the Hill FunctionAn appropriate approach for modeling the rate of production of a protein, $Y$, as a function of its inputs, $X_1, X_2, \ldots$, is with the "Hill Function". The Hill Function for a single transcriptional activator is:$$f(X) = \frac{\beta X^n}{K^n + X^n}$$$X$ represents the concentration of a transcriptional activator and $f(X)$ represents the combined transcription and translation of the gene $Y$ that is regulated by $X$. Modeling transcriptional activationWrite a Python function to represent transcriptional activation based on the Hill function given above:
###Code
# import statements to make numeric and plotting functions available
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
## define your function in this cell
def hill_activating(X, B, K, n):
Xn = X**n
return (B * Xn)/(K**n + Xn)
## generate a plot using your hill_activating function defined above
# setup paramters for our simulation
B = 5
K = 10
x = linspace(0,30,200) # generate 200 evenly spaced points between 0 and 30
y = hill_activating(x, B, K, 1) # hill fxn with n = 1
plot(x, y, label='n=1')
xlabel('Concentration of X')
ylabel('Promoter activity')
legend(loc='best')
ylim(0, 6)
pass
###Output
_____no_output_____
###Markdown
In class exercise Following the example above, generate plots for the Hill function where $n = {1, 2, 4, 8}$. Note that you can generate multiple curves in the same figure by repeatedly calling the plot function.
###Code
## generate curves for different n here
# setup paramters for our simulation
B = 5
K = 10
x = linspace(0,30,200) # generate 200 evenly spaced points between 0 and 30
y1 = hill_activating(x, B, K, 1) # hill fxn with n = 1
y2 = hill_activating(x, B, K, 2) # hill fxn with n = 2
y4 = hill_activating(x, B, K, 4) # hill fxn with n = 4
y8 = hill_activating(x, B, K, 8) # hill fxn with n = 8
plot(x, y1, label='n=1')
plot(x, y2, label='n=2')
plot(x, y4, label='n=4')
plot(x, y8, label='n=8')
xlabel('Concentration of X')
ylabel('Rate of production of Y')
legend(loc='best')
ylim(0, 6)
pass
###Output
_____no_output_____
###Markdown
Transcriptional repressionIf rather than stimulating the production of $Y$, $X$ "represses" $Y$, we can write the corresponding Hill function as:$$f(X) = \frac{\beta}{1 + (X/K)^n}$$Remember that both of these Hill functions (activating and repressing) describe the production of $Y$ as a function of the levels of $X$, *not* the temporal dynamics of $Y$ which we'll look at after developing a few more ideas. Modeling transcriptional repressionWrite a function to represent *transcriptional repression*, using the repressive Hill function given above:
###Code
## define your repressive hill function in this cell
def hill_repressing(X, B, K, n):
return B/(1.0 + (X/K)**n)
## generate a plot using your hill_activating function defined above
## For X values range from 0 to 30
B = 5
K = 10
x = linspace(0,30,200)
plot(x, hill_repressing(x, B, K, 1), label='n=1')
plot(x, hill_repressing(x, B, K, 2), label='n=2')
plot(x, hill_repressing(x, B, K, 4), label='n=4')
plot(x, hill_repressing(x, B, K, 8), label='n=8')
xlabel('Conc. of X')
ylabel('Rate of production of Y')
legend(loc='best')
ylim(0, 6)
pass
###Output
_____no_output_____
###Markdown
Interactive exploration of the Hill function Download the files `hill-fxn.py` and `pnsim.py` from the course website. Run this application from your terminal with the command: `python hill-fxn.py`.- Run the `hill-fxn.py` script from your terminal by typing `python hill-fxn.py`. - There are three sliders at the bottom of the application window. You can drag the blue regions of these sliders left or right to change the indicated parameter values. The exact values of each parameter are shown to the right of the sliders. As you drag the sliders the plot will update to show you what the Hill function looks like for the combination of parameters you have currently specified.- Also note there is a dashed vertical line in the plot window. When you move your mouse over the plot window this line will follow your position. As you do so, x- and y-plot values in the lower left of the application window will update to show you the exact position your mouse is pointing to in the plot. The dashed line and the plot readout are useful for reading values off the plot. Homework 1: Use the `hill-fxn.py` script to answer the following questions 1. Vary the parameter $n$ over the range 1 to 10. - a) Describe what happens to the shape of the plot. - b) How does changing $n$ change the maximum (or asymptotic maximum) promoter activity ($V_{max}$)? - c) At what value of activator concentration is half of the maximum promoter activity reached?2. Vary the parameter $\beta$. How does changing $\beta$ change: - a) the shape of the plot? - b) the maximum promoter activity? - c) the activator concentration corresponding to half-maximal promoter activity?3. Vary the parameter $K$. How does changing $K$ change: - a) the shape of the plot? - b) the maximum promoter activity? - c) the activator concentration corresponding to half-maximal promoter activity?4. Download and run the script `hill-fxn-wlogic.py` -- This is like the previous `hill-fxn.py` script except it now includes a set of buttons for toggling the logic approximation on and off. As before vary the parameters $n$, $\beta$ and $K$. - a) When is the logic approximation a good approximation to the Hill function? Simplifying Models using Logic ApproximationsTo simplify analysis it's often convenient to approximate step-like sigmoidal functions like those produced by the Hill equation with functions using logic approximations. We'll assume that when the transcription factor, $X$, is above a threshold, $K$, then gene $Y$ is transcribed at a rate, $\beta$. When $X$ is below the threshold, $K$, gene $Y$ is not transcribed. To represent this situation, we can rewrite the formula for $Y$ as:$$f(X) = \beta\ \Theta(X > K)$$where the function $\Theta$ is zero if the statement inside the parentheses is false or one if the statement is true. An alternate way to write this is:$$f(X) = \begin{cases} \beta, &\text{if $X > K$;} \\ 0, &\text{otherwise.}\end{cases}$$When $X$ is a repressor we can write:$$f(X) = \beta\ \Theta(X < K)$$ Python functions for the logic approximationWrite Python functions to represent the logic approximations for activation and repression as given above:
###Code
## write your logic approximation functions here
def logic_activating(X, B, K):
if X > K:
theta = 1
else:
theta = 0
return B*theta
def logic_repressing(X, B, K):
if X < K:
theta = 1
else:
theta = 0
return B*theta
###Output
_____no_output_____
###Markdown
Generate plots comparing the logic approximation to the Hill function, for the activating case:
###Code
## generate plots using your hill_activating and logic_activating functions defined above
## For X values range from 0 to 30
B = 5
K = 10
n = 4
x = linspace(0, 30, 200)
plot(x, hill_activating(x, B, K, n), label=f'n={n}')  # label matches the n set above
logicx = [logic_activating(i, B, K) for i in x]
plot(x, logicx, label='logic approximation')
xlabel('Concentration of X')
ylabel('Promoter activity')
ylim(-0.1, 5.5)
legend(loc='best')
pass
###Output
_____no_output_____
###Markdown
Multi-dimensional Input FunctionsWhat if a gene needs two or more activator proteins to be transcribed? We can describe the amount of $Z$ transcribed as a function of active forms of $X$ and $Y$ with a function like:$$ f(X,Y) = \beta\ \Theta(X > K_x \land Y > K_y)$$The above equation describes "AND" logic (i.e. *both* X and Y have to be above their threshold levels, $K_x$ and $K_y$, for Z to be transcribed). In a similar manner we can define "OR" logic:$$f(X,Y) = \beta\ \Theta(X > K_x \lor Y > K_y)$$A SUM function would be defined like this:$$f(X,Y) = \beta_x \Theta(X > K_x) + \beta_y \Theta (Y > K_y)$$ Modeling changes in network components over timeUp until this point we've been considering how the *rate of production* of a protein $Y$ changes with the concentration of a transcriptional activator/repressor that regulates $Y$. Now we want to turn to the question of how the absolute amount of $Y$ changes over time. As we discussed at the beginning of this notebook, how the amount of $Y$ changes over time is a function of two things: 1) a growth term which represents the rate of production of $Y$; and 2) a decay term which gives the rate at which $Y$ is degraded. A differential equation describing this is as follows:$$\frac{dY}{dt} = f(X_1, X_2, \ldots) - \alpha Y$$The $f(X_1, X_2, \ldots)$ term represents the growth term and is a function of the transcription factors that regulate $Y$. We've already seen a couple of ways to model the rate of production -- using the Hill function or its logic approximation. For the sake of simplicity we'll use the logic approximation to model the growth term. For example, in the case where $Y$ is regulated by a single input we might use $f(X) = \beta \Theta(X > K_1)$. For the equivalent function where $Y$ is regulated by two transcription factors, $X_1$ and $X_2$, and both are required to be above their respective thresholds, we could use the function $f(X_1, X_2) = \beta \Theta (X_1 > K_1 \land X_2 > K_2)$. The second term, $\alpha Y$, represents the rate at which $Y$ is being broken down or diluted. Notice that the decay rate is proportional to the amount of $Y$ that is present. Change in concentration under constant activationNow let's explore a simple model of regulation for the two gene network, $X \longrightarrow Y$. Here we assume that at time 0 the activator, $X$, rises above the threshold, $K$, necessary to induce transcription of $Y$ at the rate $\beta$. $X$ remains above this threshold for the entire simulation. Therefore, we can write $dY/dt$ as:$$\frac{dY}{dt} = \beta - \alpha Y$$Write a Python function to represent the change in $Y$ in a given time increment, under this assumption of constant activation:
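Before implementing that, here is a minimal sketch of the two-input logic functions described above (these `logic_and` / `logic_or` helpers are illustrative and are not used later in the notebook):
###Code
# Illustrative sketches of the AND and OR input functions (not used below)
def logic_and(X, Y, B, Kx, Ky):
    """Z is produced at rate B only when both X and Y exceed their thresholds."""
    return B if (X > Kx and Y > Ky) else 0

def logic_or(X, Y, B, Kx, Ky):
    """Z is produced at rate B when either X or Y exceeds its threshold."""
    return B if (X > Kx or Y > Ky) else 0

print(logic_and(5, 15, B=2, Kx=10, Ky=10))  # 0: X is below its threshold
print(logic_or(5, 15, B=2, Kx=10, Ky=10))   # 2: Y alone is enough
###Output
_____no_output_____
###Markdown
Now, the single-input case with constant activation, $\frac{dY}{dt} = \beta - \alpha Y$: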
###Code
## write a function to represent the simple differential equation above
def dYdt(B,a,Y):
return B - a*Y # write your code here
## generate a plot using your dY function defined above
## Evaluated over 200 time units
Y = [0] # initial value of Y
B = 0.2
a = 0.05
nsteps = 200
for i in range(nsteps):
deltay = dYdt(B, a, Y[-1])
ynew = Y[-1] + deltay
Y.append(ynew)
plot(Y)
ylim(0, 4.5)
xlabel('Time units')
ylabel('Concentration of Y')
pass
###Output
_____no_output_____
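###Markdown
A useful check on the plot above: at steady state $\frac{dY}{dt} = 0$, so $0 = \beta - \alpha Y_{st}$, which gives $Y_{st} = \beta/\alpha$. With the parameters used here ($\beta = 0.2$, $\alpha = 0.05$) that is $0.2/0.05 = 4$, which is exactly the level the curve approaches.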
###Markdown
Toggling the activator XIn the preceding example the activator $X$ was on at the beginning of the simulation and just stayed on. Let's see what happens when $X$ has pulsatile dynamics. This would be akin to toggling $X$ on then off, and asking what happens to $Y$.
###Code
# setup pulse of X
# off (0) for first 50 steps, on for next 100 steps, off again for last 100 steps
X = [0]*50 + [1]*100 + [0]*100
Y = [0]
B = 0.2
K = 0.5
a = 0.05
nsteps = 250
for i in range(1, nsteps):
xnow = X[i]
growth = logic_activating(xnow, B, K)
decay = a*Y[-1]
deltay = growth - decay
ynew = Y[-1] + deltay
Y.append(ynew)
plot(X, color='red', linestyle='dashed', label="X")
plot(Y, color='blue', label="Y")
ylim(0, 4.5)
xlabel('Time units')
ylabel('Concentration')
legend(loc="best")
pass
###Output
_____no_output_____ |
notebooks/part-2-geopandas.ipynb | ###Markdown
Introduction to GeoPandas Table of Contents1. [Points, lines and polygons](GeoDataFrames)2. [Spatial relationships](spatial)3. [London boroughs](boroughs) 3.1. [Load geospatial data](load1) 3.2. [Explore data](explore1)4. [Open Street Map data (OSM)](osm) 4.1. [Load data](load2) 4.2. [Explore data](explore2) If you are using Watson Studio to run the workshop, you will need to add the project token that you created earlier to your notebook to be able to access the shape files from your Cloud Object Store (COS). Click the 3 dots at the top right side of the notebook to insert the project token. This will create a new cell containing your COS credentials, which you will need to run first before continuing with the rest of the notebook. If you are sharing this notebook you should remove this cell, else anyone can use your COS from this project. If you cannot find the new cell it is probably at the top of this notebook. Scroll up, run the cell and continue with the rest of the notebook below. Installing geopandasGeoPandas has many dependencies on other packages, so be careful!* [geopandas installation instructions](https://geopandas.readthedocs.io/en/latest/getting_started/install.html)* [geoplot installation instructions](https://residentmario.github.io/geoplot/installation.html)
###Code
!time conda install --freeze-installed mapclassify descartes geopandas
!time pip install geoplot
import pandas as pd
import geopandas as gpd
from shapely.geometry import Point, LineString, Polygon
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. Points, lines and polygonsA [`GeoSeries`](http://geopandas.org/data_structures.html) is a vector where each row is a set of shapes corresponding to one observation. A row may consist of only one shape (like a single polygon) or multiple shapes that are meant to be thought of as one observation. A `GeoDataFrame` is very similar to a Pandas `DataFrame`, but has an additional column with the shape or `geometry`. You can load a file, or create your own `GeoDataFrame`. Below, the latitude and longitude of 5 cities are used to create a `POINT` geometry column that is used to create a `GeoDataFrame` from a `DataFrame`:
###Code
df = pd.DataFrame({'city': ['London','Manchester','Birmingham','Leeds','Glasgow'],
'population': [9787426, 2553379, 2440986, 1777934, 1209143],
'area': [1737.9, 630.3, 598.9, 487.8, 368.5 ],
'latitude': [51.50853, 53.48095, 52.48142, 53.79648,55.86515],
'longitude': [-0.12574, -2.23743, -1.89983, -1.54785,-4.25763]})
df['geometry'] = list(zip(df.longitude, df.latitude))
df['geometry'] = df['geometry'].apply(Point)
cities = gpd.GeoDataFrame(df, geometry='geometry')
cities.head()
list(zip(df.longitude, df.latitude))
###Output
_____no_output_____
###Markdown
Creating a basic map from this data is similar to creating a plot from a Pandas DataFrame by using `.plot()`. Below the column name defines what to use for the colours in the map. (We will come back to creating and editing more maps later)
###Code
cities.plot(column='population');
###Output
_____no_output_____
###Markdown
As `cities` is still a DataFrame you can apply the same data manipulations, for instance:
###Code
cities['population'].mean()
cities['area'].min()
cities['density'] = cities['population']/cities['area']
cities
###Output
_____no_output_____
###Markdown
But there are additional methods you can use (from the [geopandas documentation](http://geopandas.org/data_structures.htmloverview-of-attributes-and-methods)): Attributes* `area`: shape area* `bounds`: tuple of max and min coordinates on each axis for each shape* `total_bounds`: tuple of max and min coordinates on each axis for entire GeoSeries* `geom_type`: type of geometry* `is_valid`: tests if coordinates make a shape that is a reasonable geometric shape Basic Methods* `distance(other)`: returns Series with minimum distance from each entry to other* `centroid`: returns GeoSeries of centroids* `representative_point()`: returns GeoSeries of points that are guaranteed to be within each geometry. It does NOT return centroids* `to_crs()`: change coordinate reference system* `plot()`: plot GeoSeries Relationship Tests* `geom_almost_equals(other)`: is shape almost the same as other (good when floating point precision issues make shapes slightly different)* `contains(other)`: is shape contained within other* `intersects(other)`: does shape intersect other We can explore a few of these with the cities data:
###Code
cities.area
cities.total_bounds
cities.geom_type
cities.distance(cities.geometry[0])
###Output
_____no_output_____
###Markdown
For the other attributes and methods we need some more data. * A line between 2 cities by squeezing out the geometry and then creating a LineString* Circles around the cities by adding a buffer around the points
###Code
london = cities.loc[cities['city'] == 'London', 'geometry'].squeeze()
manchester = cities.loc[cities['city'] == 'Manchester', 'geometry'].squeeze()
line = gpd.GeoSeries(LineString([london, manchester]))
line.plot();
cities2 = cities.copy()
cities2['geometry'] = cities2.buffer(1)
cities2 = cities2.drop([1, 2])
cities2.head()
cities2.geometry[0]
cities2.plot();
###Output
_____no_output_____
###Markdown
And plot all of them together:
###Code
base = cities2.plot(color='lightblue', edgecolor='black')
cities.plot(ax=base, marker='o', color='red', markersize=10);
line.plot(ax=base);
###Output
_____no_output_____
###Markdown
Polygons can be of any shape as you will see later in the workshop, using circles here as a quick example. Polygons can contain holes. Let's subtract a small circle from three larger ones to see what that looks like:
###Code
cities3 = cities.copy()
cities3['geometry'] = cities3.buffer(2)
cities3 = cities3.drop([1, 2])
gpd.overlay(cities3, cities2, how='difference').plot();
###Output
_____no_output_____
###Markdown
With these new shapes let's explore some more methods:
###Code
cities2.area
cities2.bounds
cities2.centroid
cities3.representative_point()
###Output
_____no_output_____
###Markdown
2. Spatial relationshipsWhat can you do with geospatial relationships? There are several functions to check geospatial relationships between geometries: `equals`, `contains`, `crosses`, `disjoint`,`intersects`,`overlaps`,`touches`,`within` and `covers`. These all use the `shapely` package about which you can read more [here](https://shapely.readthedocs.io/en/stable/manual.htmlpredicates-and-relationships) and some more background on spatial relationships [here](https://en.wikipedia.org/wiki/Spatial_relation).A few examples:
###Code
cities2.head()
cities.head()
cities2.contains(cities.geometry[0])
cities2.contains(london)
cities2[cities2.contains(london)]
cities2[cities2.contains(manchester)]
###Output
_____no_output_____
###Markdown
The inverse of `contains`:
###Code
cities[cities.within(cities2)]
cities2.intersects(line)
cities2[cities2.crosses(line)]
cities2[cities2.disjoint(london)]
###Output
_____no_output_____
###Markdown
3. London boroughs 3.1 Load geospatial dataGeospatial data comes in many formats, but with GeoPandas you can read most files with just one command. For example this geojson file with the London boroughs:
###Code
# load data from a url
boroughs = gpd.read_file("https://skgrange.github.io/www/data/london_boroughs.json")
boroughs.head()
###Output
_____no_output_____
###Markdown
3.2 Explore data
###Code
boroughs.plot();
###Output
_____no_output_____
###Markdown
Adding a column will colour the map based on the classes in this column:
###Code
boroughs.plot(column='code');
boroughs.plot(column='area_hectares');
###Output
_____no_output_____
###Markdown
The boroughs are made up of many districts that you might want to combine. For this example this can be done by adding a new column and then using `.dissolve()`:
###Code
boroughs['all'] = 1
allboroughs = boroughs.dissolve(by='all',aggfunc='sum')
allboroughs.head()
allboroughs.plot();
###Output
_____no_output_____
###Markdown
To change the size of the map and remove the box around the map, run the below:
###Code
[fig, ax] = plt.subplots(1, figsize=(10, 6))
allboroughs.plot(ax=ax);
ax.axis('off');
###Output
_____no_output_____
###Markdown
Let's combine the data from the pandas notebook with the boroughs GeoDataFrame:
###Code
df = pd.read_csv('https://raw.githubusercontent.com/IBMDeveloperUK/crime-data-workshop/master/data/london-borough-profiles.csv',encoding = 'unicode_escape')
df.head()
boroughs.head()
###Output
_____no_output_____
###Markdown
The columns to join the two tables on are `code` and `Code`. To use the [`join` method](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html), first the index of both tables has to be set to this column. The below adds the columns from `df` to `boroughs`:
###Code
boroughs = boroughs.set_index('code').join(df.set_index('Code'))
boroughs.head()
###Output
_____no_output_____
###Markdown
EXERCISES Create a map that shows two regions: Inner and Outer London. Create a map of the average gender pay gap for each borough. Create a map or maps with the columns that you are curious about. Tip: you can pick any of the color maps from [here](https://matplotlib.org/3.1.1/gallery/color/colormap_reference.html) to use in your maps. **Create a map that shows two regions: Inner and Outer London**
###Code
# your answer
# %load https://raw.githubusercontent.com/IBMDeveloperUK/Python-Geopandas-Workshop/master/answers/geo-answer1.py
# %load https://raw.githubusercontent.com/IBMDeveloperUK/Python-Geopandas-Workshop/master/answers/geo-answer2.py
###Output
_____no_output_____
###Markdown
**Create a map of the average gender pay gap for each borough**
###Code
# your answer
# %load https://raw.githubusercontent.com/IBMDeveloperUK/Python-Geopandas-Workshop/master/answers/geo-answer3.py
###Output
_____no_output_____
###Markdown
**Create a map or maps with the columns that you are curious about**Your time to play!
###Code
# your maps
###Output
_____no_output_____
###Markdown
4. Open Street Map data (OSM) 4.1 Load OSM dataAs the raw data file is large, the [Open Street Map data](http://download.geofabrik.de/europe/great-britain.html) is pre-processed in the last few cells of this [notebook](https://github.com/IBMDeveloperUK/Python-Geopandas-Workshop/blob/master/notebooks/prepare-uk-crime-data.ipynb):
```python
bounding_box = boroughs.envelope
bb = gpd.GeoDataFrame(gpd.GeoSeries(bounding_box), columns=['geometry'])
london2 = london.drop([0, 0])
xmin, ymin, xmax, ymax = london2.total_bounds
pois_all = gpd.read_file("data/england-latest-free/gis_osm_pois_free_1.shp")
pois = pois_all.cx[xmin:xmax, ymin:ymax]
pois.to_file("data/london_pois.shp")
```
Data is downloaded from http://download.geofabrik.de/europe/great-britain.html and a more detailed description of the data is [here](http://download.geofabrik.de/osm-data-in-gis-formats-free.pdf). The data format is a shape file that consists of several files combined into one zip file that can be read directly with GeoPandas. Go to the workshop repo and download the data file to your local computer.* Download the repository by clicking on the green button on the [main page](https://github.com/IBMDeveloperUK/Python-Geopandas-Workshop)* Or download only the file from the [data folder](https://github.com/IBMDeveloperUK/Python-Geopandas-Workshop/tree/master/data) by right-clicking on it and then selecting **save link as...** To load the data into this notebook, first store the file london_pois.zip in your Cloud Object Store (COS) through the menu at the right of the notebook (if you see no menu, click the 1010 button at the top first). Then load the data into the notebook by running the following two cells:
###Code
# define the helper function
def download_file_to_local(project_filename, local_file_destination=None, project=None):
"""
Uses project-lib to get a bytearray and then downloads this file to local.
Requires a valid `project` object.
Args:
project_filename str: the filename to be passed to get_file
local_file_destination: the filename for the local file if different
Returns:
0 if everything worked
"""
project = project
# get the file
print("Attempting to get file {}".format(project_filename))
_bytes = project.get_file(project_filename).read()
# check for new file name, download the file
print("Downloading...")
if local_file_destination==None: local_file_destination = project_filename
with open(local_file_destination, 'wb') as f:
f.write(bytearray(_bytes))
print("Completed writing to {}".format(local_file_destination))
return 0
download_file_to_local('london_pois.zip', project=project)
pois = gpd.read_file("zip://./london_pois.zip")
pois.head()
###Output
_____no_output_____
###Markdown
4.2 Explore OSM data
###Code
pois.size
pois['fclass'].unique()
pois.plot(column='fclass');
###Output
_____no_output_____
###Markdown
Let's count and plot the number of pubs by borough by:* checking the coordinate systems of the maps to combine. They need to be the same to use them together.* extracting the pubs from the `pois` DataFrame* joining the tables into a temporary table* counting the number of pubs in each borough* merging this new table back into the `boroughs` DataFrame
###Code
pois[pois.fclass=='pub'].plot(column='fclass');
###Output
_____no_output_____
###Markdown
The coordinate reference system (CRS) determines how the two-dimensional (planar) coordinates of the geometry objects should be related to actual places on the (non-planar) earth.
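If two layers ever disagree, one can be reprojected onto the other with `to_crs`. A quick sketch (EPSG:27700, the British National Grid, is used here purely as an illustration and is not needed for the rest of the notebook):
###Code
# Illustrative sketch: reproject a copy of the boroughs to a metric CRS
boroughs_bng = boroughs.to_crs(epsg=27700)   # British National Grid, units in metres
print(boroughs_bng.crs)
print(boroughs_bng.area.head() / 10**6)      # polygon areas in km^2
###Output
_____no_output_____
###Markdown
The two layers used below already share the same CRS, so we can extract the pubs and join them to the boroughs directly: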
###Code
print(pois.crs)
print(boroughs.crs)
pubs = pois[pois['fclass']=='pub']
pubs.head()
pubs2 = gpd.sjoin(boroughs,pubs)
pubs2.head()
pubs3 = pd.pivot_table(pubs2,index='name_left',columns='fclass',aggfunc={'fclass':'count'})
pubs3.columns = pubs3.columns.droplevel()
pubs3 = pubs3.reset_index()
pubs3.head()
boroughs = boroughs.merge(pubs3, left_on='name',right_on='name_left')
boroughs = boroughs.drop(columns='name_left')
boroughs.head()
[fig,ax] = plt.subplots(1, figsize=(12, 8))
boroughs.plot(column='pub',cmap='Blues', edgecolor='black', linewidth=0.5,
legend=True, ax=ax, scheme='equal_interval');
ax.axis('off');
ax.set_title('Pubs in London');
###Output
_____no_output_____
###Markdown
A different way to visualize this is with a heatmap:
###Code
import geoplot
[fig,ax] = plt.subplots(1, figsize=(12, 8))
geoplot.kdeplot(
pubs, clip=boroughs.geometry, n_levels=10,
shade=True, cmap='Greens', ax=ax)
geoplot.polyplot(boroughs, ax=ax, alpha=1, edgecolor='black', linewidth=0.5)
###Output
_____no_output_____
###Markdown
EXERCISE Explore the data further with GeoPandas. Some suggestions of what to look at: Create a map that only shows all points of one of the POI classes for one of the boroughs. Add another POI class to the boroughs table. Are the columns in the borough table related to any of the POI classes? Combine the data and try making maps, scatter plots or bar charts to find out. **Create a map that only shows all points of one of the POI classes for one of the boroughs**
###Code
# your code
# %load https://raw.githubusercontent.com/IBMDeveloperUK/Python-Geopandas-Workshop/master/answers/geo-answer4.py
###Output
_____no_output_____
###Markdown
**Add another POI class to the boroughs table**
###Code
# your code
# %load https://raw.githubusercontent.com/IBMDeveloperUK/Python-Geopandas-Workshop/master/answers/geo-answer5.py
###Output
_____no_output_____
###Markdown
**Are the columns in the borough table related to any of the POI classes? Combine the data and try making maps, scatter plots or bar charts to find out.** Your turn to play! There are no right or wrong answers when exploring new data
###Code
# your code
###Output
_____no_output_____ |
exercises/pick/rigid_transforms.ipynb | ###Markdown
Exercises on Rigid Transforms Notebook Setup The following cell will install Drake, checkout the manipulation repository, and set up the path (only if necessary).- On Google's Colaboratory, this **will take approximately two minutes** on the first time it runs (to provision the machine), but should only need to reinstall once every 12 hours. More details are available [here](http://manipulation.mit.edu/drake.html).
###Code
import importlib
import os, sys
from urllib.request import urlretrieve
if 'google.colab' in sys.modules and importlib.util.find_spec('manipulation') is None:
urlretrieve(f"http://manipulation.csail.mit.edu/scripts/setup/setup_manipulation_colab.py",
"setup_manipulation_colab.py")
from setup_manipulation_colab import setup_manipulation
setup_manipulation(manipulation_sha='c1bdae733682f8a390f848bc6cb0dbbf9ea98602', drake_version='0.25.0', drake_build='releases')
# python libraries
import numpy as np
import matplotlib.pyplot as plt, mpld3
from IPython.display import HTML, display
# Install pyngrok.
server_args = []
if 'google.colab' in sys.modules:
server_args = ['--ngrok_http_tunnel']
# Start a single meshcat server instance to use for the remainder of this notebook.
from meshcat.servers.zmqserver import start_zmq_server_as_subprocess
proc, zmq_url, web_url = start_zmq_server_as_subprocess(server_args=server_args)
# Let's do all of our imports here, too.
import numpy as np
from pydrake.all import (Quaternion, RigidTransform,
RollPitchYaw, RotationMatrix
)
###Output
_____no_output_____
###Markdown
Problem DescriptionIn the lecture, we learned the basics of spatial transformations. In this exercise, you will compute simple rigid transforms by applying the rules you have learned in class.**These are the main steps of the exercise:**1. Compute rigid transforms of frames in various reference frames.2. Design grasp pose using spatial transformation Exercise on Rigid TransformsAs a brief review, we have covered two rules of spatial transformation in [class](http://manipulation.csail.mit.edu/pick.htmlsection3).\begin{equation}{^AX^B} {^BX^C} = {^AX^C},\end{equation}\begin{equation}[^AX^B]^{-1} = {^BX^A}.\end{equation} Note that the rules of transforms are based on rules of transforming positions and rotations listed below. Addition of positions in the same frame:\begin{equation}^Ap^B_F + ^Bp^C_F = ^Ap^C_F.\end{equation}The additive inverse:\begin{equation}^Ap^B_F = - ^Bp^A_F.\end{equation}Rotation of a point:\begin{equation}^Ap^B_G = {^GR^F} ^Ap^B_F.\end{equation}Chaining rotations:\begin{equation}{^AR^B} {^BR^C} = {^AR^C}.\end{equation}Inverse of rotations:\begin{equation}[^AR^B]^{-1} = {^BR^A}.\end{equation} Applying these rules will yield the same result as the ones computed by the former two rules.In Drake, you can multiply frames by ```pythonX_AB.multiply(X_BC)```You may also invert a rigid transform by the [inverse](https://drake.mit.edu/pydrake/pydrake.math.html?highlight=rigidtransformpydrake.math.RigidTransform_.RigidTransform_[float].inverse) method.```pythonX_AB.inverse()``` Now suppose you have 4 frames, namely, the world frame, frame A, frame B, and frame C defined as below.-- frame A expressed in the world frame (`X_WA`)-- frame B expressed in frame A (`X_AB`)-- frame B expressed in frame C (`X_CB`)**Calculate the following transforms by filling your code below in the designated functions.**(1) `X_WB`, frame B expressed in the world frame(2) `X_CW`, the world frame expressed in frame C
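As a quick warm-up (a toy example with made-up frames D, E and F, not part of the graded exercise), composing and inverting transforms looks like this:
###Code
# Toy example only: frames D, E, F are made up for illustration
X_DE = RigidTransform(RotationMatrix.MakeZRotation(np.pi / 2), [1.0, 0.0, 0.0])
X_EF = RigidTransform(RotationMatrix(), [0.0, 2.0, 0.0])   # pure translation
X_DF = X_DE.multiply(X_EF)     # chain: {^DX^E}{^EX^F} = {^DX^F}
X_ED = X_DE.inverse()          # invert: [{^DX^E}]^{-1} = {^EX^D}
print(X_DF.translation())
print(X_ED.translation())
###Output
_____no_output_____
###Markdown
Now fill in the two designated functions below.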
###Code
def compute_X_WB(X_WA, X_AB, X_CB):
"""
fill your code here
"""
X_WB = RigidTransform()
return X_WB
def compute_X_CW(X_WA, X_AB, X_CB):
"""
fill your code here
"""
X_CW = RigidTransform()
return X_CW
###Output
_____no_output_____
###Markdown
Design Grasp PoseThe grasp pose is commonly defined in the object frame so that the grasp pose ${^OX^G}$ is independent of the pose of the object. The grasp pose in the world frame can be computed by \begin{equation}{{^WX^G} = {}{^W}X^{O}} {^OX^G},\end{equation}where $W$ stands for the world frame and $G$ denotes the grasp frame, following the convention in the textbook. You should notice from the visualization below that the gripper frame is different from the world frame. In particular, the +y axis of the gripper frame points vertically downward, and the +z axis of the gripper points backward. This is an important observation for this exercise. **Now for your exercise, design a grasp pose that satisfies the conditions below**- **gripper's position should be 0.02 unit distance above the target object in the world frame**- **gripper's y axis should align with object's x axis**- **gripper's x axis should align with object's z axis**- **write grasp pose in the object frame and the world frame**
###Code
p0_WO = [-0.2, -0.65, 0.12] # object in world frame
R0_WO = RotationMatrix.MakeYRotation(np.pi/2)
X_WO = RigidTransform(R0_WO, p0_WO)
def design_grasp_pose(X_WO):
"""
fill in our code below
"""
X_OG = RigidTransform()
X_WG = RigidTransform()
return X_OG, X_WG
###Output
_____no_output_____
###Markdown
How will this notebook be Graded?If you are enrolled in the class, this notebook will be graded using [Gradescope](www.gradescope.com). You should have gotten the enrollment code in our announcement on Piazza. For submission of this assignment, you must do two things. - Download and submit the notebook `rigid_transforms.ipynb` to Gradescope's notebook submission section, along with your notebook for the other problems.We will evaluate the local functions in the notebook to see if the functions behave as we expect. For this exercise, the rubric is as follows:- [1 pts] `compute_X_WB` is correct- [1 pts] `compute_X_CW` is correct- [2 pts] `design_grasp_pose` is correct according to the requirement
###Code
from manipulation.exercises.pick.test_rigid_transforms import TestRigidTransforms
from manipulation.exercises.grader import Grader
Grader.grade_output([TestRigidTransforms], [locals()], 'results.json')
Grader.print_test_results('results.json')
###Output
_____no_output_____
###Markdown
Exercises on Rigid Transforms
###Code
# python libraries
import numpy as np
import matplotlib.pyplot as plt, mpld3
from IPython.display import HTML, display
from pydrake.all import (
Quaternion, RigidTransform, RollPitchYaw, RotationMatrix
)
###Output
_____no_output_____
###Markdown
Problem DescriptionIn the lecture, we learned the basics of spatial transformations. In this exercise, you will compute simple rigid transforms by applying the rules you have learned in class.**These are the main steps of the exercise:**1. Compute rigid transforms of frames in various reference frames.2. Design grasp pose using spatial transformation Exercise on Rigid TransformsAs a brief review, we have covered two rules of spatial transformation in [class](http://manipulation.csail.mit.edu/pick.htmlsection3).\begin{equation}{^AX^B} {^BX^C} = {^AX^C},\end{equation}\begin{equation}[^AX^B]^{-1} = {^BX^A}.\end{equation} Note that the rules of transforms are based on rules of transforming positions and rotations listed below. Addition of positions in the same frame:\begin{equation}^Ap^B_F + ^Bp^C_F = ^Ap^C_F.\end{equation}The additive inverse:\begin{equation}^Ap^B_F = - ^Bp^A_F.\end{equation}Rotation of a point:\begin{equation}^Ap^B_G = {^GR^F} ^Ap^B_F.\end{equation}Chaining rotations:\begin{equation}{^AR^B} {^BR^C} = {^AR^C}.\end{equation}Inverse of rotations:\begin{equation}[^AR^B]^{-1} = {^BR^A}.\end{equation} Applying these rules will yield the same result as the ones computed by the former two rules.In Drake, you can multiply frames by ```pythonX_AB.multiply(X_BC)```You may also invert a rigid transform by the [inverse](https://drake.mit.edu/pydrake/pydrake.math.html?highlight=rigidtransformpydrake.math.RigidTransform_.RigidTransform_[float].inverse) method.```pythonX_AB.inverse()``` Now suppose you have 4 frames, namely, the world frame, frame A, frame B, and frame C defined as below.-- frame A expressed in the world frame (`X_WA`)-- frame B expressed in frame A (`X_AB`)-- frame B expressed in frame C (`X_CB`)**Calculate the following transforms by filling your code below in the designated functions.**(1) `X_WB`, frame B expressed in the world frame(2) `X_CW`, the world frame expressed in frame C
###Code
def compute_X_WB(X_WA, X_AB, X_CB):
"""
fill your code here
"""
X_WB = RigidTransform()
return X_WB
def compute_X_CW(X_WA, X_AB, X_CB):
"""
fill your code here
"""
X_CW = RigidTransform()
return X_CW
###Output
_____no_output_____
###Markdown
Design Grasp PoseThe grasp pose is commonly defined in the object frame so that the grasp pose ${^OX^G}$ is independent of the pose of the object. The grasp pose in the world frame can be computed by \begin{equation}{{^WX^G} = {}{^W}X^{O}} {^OX^G},\end{equation}where $W$ stands for the world frame and $G$ denotes the grasp frame, following the convention in the textbook. You should notice from the visualization below that the gripper frame is different from the world frame. In particular, the +y axis of the gripper frame points vertically downward, and the +z axis of the gripper points backward. This is an important observation for this exercise. **Now for your exercise, design a grasp pose that satisfies the conditions below**- **gripper's position should be 0.02 unit distance above the target object in the world frame**- **gripper's y axis should align with object's x axis**- **gripper's x axis should align with object's z axis**- **write grasp pose in the object frame and the world frame**
###Code
p0_WO = [-0.2, -0.65, 0.12] # object in world frame
R0_WO = RotationMatrix.MakeYRotation(np.pi/2)
X_WO = RigidTransform(R0_WO, p0_WO)
def design_grasp_pose(X_WO):
"""
fill in our code below
"""
X_OG = RigidTransform()
X_WG = RigidTransform()
return X_OG, X_WG
###Output
_____no_output_____
###Markdown
How will this notebook be Graded?If you are enrolled in the class, this notebook will be graded using [Gradescope](www.gradescope.com). You should have gotten the enrollment code in our announcement on Piazza. For submission of this assignment, you must do two things. - Download and submit the notebook `rigid_transforms.ipynb` to Gradescope's notebook submission section, along with your notebook for the other problems.We will evaluate the local functions in the notebook to see if the functions behave as we expect. For this exercise, the rubric is as follows:- [1 pts] `compute_X_WB` is correct- [1 pts] `compute_X_CW` is correct- [2 pts] `design_grasp_pose` is correct according to the requirement
###Code
from manipulation.exercises.pick.test_rigid_transforms import TestRigidTransforms
from manipulation.exercises.grader import Grader
Grader.grade_output([TestRigidTransforms], [locals()], 'results.json')
Grader.print_test_results('results.json')
###Output
_____no_output_____
###Markdown
Exercises on Rigid Transforms
###Code
# python libraries
import numpy as np
import matplotlib.pyplot as plt, mpld3
from IPython.display import HTML, display
from pydrake.all import (
Quaternion, RigidTransform, RollPitchYaw, RotationMatrix
)
###Output
_____no_output_____
###Markdown
Problem DescriptionIn the lecture, we learned the basics of spatial transformations. In this exercise, you will compute simple rigid transforms applying the rules you have learned in class.**These are the main steps of the exercise:**1. Compute rigid transforms of frames in various reference frames.2. Design grasp pose using spatial transformation Exercise on Rigid TransformsAs a brief review, we have covered two rules of spatial transformation in [class](http://manipulation.csail.mit.edu/pick.htmlsection3).$${^AX^B} {^BX^C} = {^AX^C},$$$$[^AX^B]^{-1} = {^BX^A}.$$Note that the rules of transforms are based on rules of transforming positions and rotations listed below. Addition of positions in the same frame:$$^Ap^B_F + ^Bp^C_F = ^Ap^C_F.$$The additive inverse:$$^Ap^B_F = - ^Bp^A_F.$$Rotation of a point:$$^Ap^B_G = {^GR^F} ^Ap^B_F.$$Chaining rotations:$${^AR^B} {^BR^C} = {^AR^C}.$$Inverse of rotations:$$[^AR^B]^{-1} = {^BR^A}.$$ Applying these rules will yield the same result as the ones computed by the former two rules.In Drake, you can multiply frames by ```pythonX_AB.multiply(X_BC)X_AB @ X_BC```You may also invert a rigid transform by the [inverse](https://drake.mit.edu/pydrake/pydrake.math.html?highlight=rigidtransformpydrake.math.RigidTransform_.RigidTransform_[float].inverse) method.```pythonX_AB.inverse()``` Now suppose you have 4 frames, namely, the world frame, frame A, frame B, and frame C defined as below.-- frame A expressed in the world frame (`X_WA`)-- frame B expressed in frame A (`X_AB`)-- frame B expressed in frame C (`X_CB`)**Calculate the following transforms by filling your code below in the designated functions.**(1) `X_WB`, frame B expressed in the world frame(2) `X_CW`, the world frame expressed in frame C
###Code
def compute_X_WB(X_WA, X_AB, X_CB):
"""
fill your code here
"""
X_WB = RigidTransform()
return X_WB
def compute_X_CW(X_WA, X_AB, X_CB):
"""
fill your code here
"""
X_CW = RigidTransform()
return X_CW
###Output
_____no_output_____
###Markdown
Design Grasp PoseThe grasp pose is commonly defined in the object frame so that the grasp pose ${^OX^G}$ is independent of the pose of the object. The grasp pose in the world frame can be computed by $${^WX^G} = {^WX^O} {^OX^G},$$where $W$ stands for the world frame and $G$ denotes the grasp frame, following the convention in the textbook. You should notice from the visualization below that the gripper frame is different from the world frame. In particular, the +y axis of the gripper frame points vertically downward, and the +z axis of the gripper points backward. This is an important observation for this exercise. **Now for your exercise, design a grasp pose that satisfies the conditions below**- **gripper's position should be 0.02 unit distance above the target object in the world frame**- **gripper's y axis should align with object's x axis**- **gripper's x axis should align with object's z axis**- **write grasp pose in the object frame and the world frame**
###Code
p0_WO = [-0.2, -0.65, 0.12] # object in world frame
R0_WO = RotationMatrix.MakeYRotation(np.pi/2)
X_WO = RigidTransform(R0_WO, p0_WO)
def design_grasp_pose(X_WO):
"""
fill in your code below
"""
X_OG = RigidTransform()
X_WG = RigidTransform()
return X_OG, X_WG
###Output
_____no_output_____
###Markdown
How will this notebook be Graded?If you are enrolled in the class, this notebook will be graded using [Gradescope](www.gradescope.com). You should have gotten the enrollment code in our announcement on Piazza. For submission of this assignment, you must do two things. - Download and submit the notebook `rigid_transforms.ipynb` to Gradescope's notebook submission section, along with your notebook for the other problems.We will evaluate the local functions in the notebook to see if they behave as expected. For this exercise, the rubric is as follows:- [1 pts] `compute_X_WB` is correct- [1 pts] `compute_X_CW` is correct- [2 pts] `design_grasp_pose` is correct according to the requirement
###Code
from manipulation.exercises.pick.test_rigid_transforms import TestRigidTransforms
from manipulation.exercises.grader import Grader
Grader.grade_output([TestRigidTransforms], [locals()], 'results.json')
Grader.print_test_results('results.json')
###Output
_____no_output_____ |
algorithm/ai_week_7.ipynb | ###Markdown
Hash tables- A structure holding key and value data is called a hash table, and lookups are done through the hash table- Feeding a key into a hash function yields an address in the hash table; the computed value is called the hash value or hash address.- Think of the relationship between a street address and its postal code
###Code
# Finding information with a dictionary
d = {"Justin":13, "John":10, "Mike":9}
print(d["Justin"])
print()
d["Summer"] = 1
d
d ={}
d =dict()
# both are the same (an empty dictionary)
s = {1,2,3} # a set containing 1, 2, 3
d = {1:2, 3:4} # a dictionary with 2 at key 1 and 4 at key 3
s = set() # empty set s
d = dict() # empty dictionary
# Algorithm that finds people with the same name, using a dictionary
# Find names that appear two or more times
# Input: a list containing n names
# Output: the set of names that repeat among the n names
def find_same_name(a):
#Step 1: count how many times each name appears, using a dictionary
name_dict = {}
for name in a: #iterate over the items in list a one by one
if name in name_dict: #if the name is already in name_dict
name_dict[name] += 1 #increase its count by 1
else: #if it is a new name
name_dict[name] = 1 #store its count as 1
#Step 2: add the names whose count is 2 or more to the result
result = set() #empty set to hold the result
for name in name_dict: #iterate over the entries of name_dict one by one
if name_dict[name] >= 2:
result.add(name)
return result
name = ['Tom', 'Jerry','Mike','Tom']
print(find_same_name(name))
name2 = ["Tom", "Jerry", "Mike", "Tom", "Mike"]
print(find_same_name(name2) )
###Output
{'Tom'}
{'Mike', 'Tom'}
###Markdown
Computational complexity: time complexity and space complexity- Finding duplicate names with a dictionary has better time complexity than comparing every pair of people- However, it builds a dictionary that stores every name and its count, so it uses more storage > time complexity is improved at the cost of space complexity- To analyze an algorithm properly, you have to consider space complexity together with time complexity- Modern computers generally have very large storage, so in practice they tend to be relatively less sensitive to space complexity
###Code
#Find the student name for a given student number, using a dictionary
#Input: student roster dictionary s_info, student number to look up find_no
#Output: the matching student name, or "?" if the number is not found
def get_name(s_info, find_no):
if find_no in s_info:
return s_info[find_no]
else:
return "?"
sample_info = {
39: "Justin",
14: "John",
67: "Mike",
105: "Summer"
}
print(get_name(sample_info, 105))
print(get_name(sample_info, 777))
###Output
Summer
?
###Markdown
An algorithm for finding all of one's friends
###Code
#Algorithm that finds all of a person's friends from a friend list
#Input: friendship graph g, the person start whose friends we want to find
#Output: the names of all friends
def print_all_friends(g, start):
qu = [] #storage 1: queue of the people still to be processed
done = set() # storage 2: set of people already added to the queue (avoids duplicates)
qu.append(start) #start by putting yourself in the queue
done.add(start) #and add yourself to the set as well
while qu: #while there are still people to process in the queue
p = qu.pop(0) #take one person out of the queue
print(p) #print their name, and
for x in g[p]: #among their friends,
if x not in done: #anyone who has never been added to the queue
qu.append(x) #is added to the queue
done.add(x) #and to the set as well
#Friendship list
#If A and B are friends,
#B appears in A's friend list and A appears in B's friend list
f_info = {
'Summer':['John', 'Justin', 'Mike'],
'John':['Summer','Justin'],
'Justin':['John','Summer', 'Mike', 'May'],
'Mike':['Summer','Justin'],
'May':['Justin','Kim'],
'Kim':['May'],
'Tom':['Jerry'],
'Jerry':['Tom']
}
print_all_friends(f_info, 'Summer')
print()
print_all_friends(f_info, 'Jerry')
###Output
Summer
John
Justin
Mike
May
Kim
Jerry
Tom
###Markdown
The Concept of a Graph- A visual, structured way to represent complex real-world tasks- A useful tool for explaining things clearly and visibly- Represents phenomena or objects with vertices (Vertex) and edges (Edge)- A data structure for elements with many-to-many relationships that are hard to express with linear or tree structures- Useful for modelling relationships, distances, costs and other properties between key elements- Two vertices connected by an edge are said to be adjacent (Adjacent)- An edge expresses the relationship between two vertices- Examples of graphs: national road networks, city subways, network diagrams, computer networks, human relationships, social organizations, data structures, molecules, gene relationships, etc.- A graph is the combination of a collection of vertices and a collection of edges joining those vertices- A weight can express the degree of closeness, and may also be a distance, a time, and so on. Kinds of graphs- Undirected graph: a graph whose edges have no direction- Directed graph: a graph whose edges have a direction; directions can express supply relationships between companies, precedence between tasks, etc.- Weighted graph: a graph that assigns a value to every edge connecting two vertices- The value assigned to an edge can be a property such as the distance or cost between the two vertices. Graph terminology- Adjacent: describes two vertices connected by an edge; we also say they are neighbors- Path: a sequence of vertices reached by following edges, listed in order- In graph G1, the paths from vertex A to vertex C include A-B-C, A-D-C, A-E-B-C, and so on- The paths A-B-C and A-D-C have length 2, and A-E-B-C has length 3. Degree- The degree of a vertex is the number of edges attached to that vertex- The sum of the degrees of all vertices in a graph is twice the number of edges. Representing a graph- A graph is the combination of a set of vertices and a set of edges- Representing a graph therefore comes down to representing those two sets- An edge records that two vertices are adjacent- So the representation problem is really the question of how to record adjacency between vertices- The matrix-based approach is the adjacency matrix (Adjacency Matrix)- The list-based approach is the adjacency list (Adjacency List). Adjacency matrix- Stores in a matrix whether an edge connects each pair of vertices- A matrix expressing the adjacency relationships between vertices- Each vertex appears as a row and a column element- If an edge connects two vertices the matrix entry is 1, otherwise 0- The adjacency matrix is easy to understand and tells you immediately whether an edge exists- It is an n * n matrix (n: total number of vertices)- entry (i,j) = 1: there is an edge between vertex i and vertex j- entry (i,j) = 0: there is no edge between vertex i and vertex j- Because an n * n matrix is required, it needs space proportional to n^2- Just filling in every entry while preparing the matrix takes time proportional to n^2. For an undirected graph- entry (i,j) indicates whether there is an edge from vertex i to vertex j. For a weighted graph- entry (i, j) holds the weight instead of 1
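A small illustrative sketch (added for this write-up, not from the original lecture code): the same undirected graph stored once as an adjacency matrix and once as an adjacency list.
###Code
# Hypothetical graph: 4 vertices and 4 undirected edges.
vertices = ['A', 'B', 'C', 'D']
edges = [('A', 'B'), ('A', 'C'), ('B', 'C'), ('C', 'D')]
index = {v: i for i, v in enumerate(vertices)}

# Adjacency matrix: n x n, entry 1 means the two vertices are adjacent.
matrix = [[0] * len(vertices) for _ in vertices]
for u, v in edges:
    matrix[index[u]][index[v]] = 1
    matrix[index[v]][index[u]] = 1  # undirected graph: the matrix is symmetric

# Adjacency list: each vertex maps to the list of its neighbors.
adj_list = {v: [] for v in vertices}
for u, v in edges:
    adj_list[u].append(v)
    adj_list[v].append(u)

print(matrix)
print(adj_list)
###Output
_____no_output_____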
###Code
###Output
_____no_output_____ |
Pandas_Vorbereitungen/Pandas Spickzettel.ipynb | ###Markdown
A first overview of files you have read in**df.head(n)** - shows the first five rows of a dataframe by default (or, if n is given as a higher number, that many rows)**df.tail(n)** - shows the **last** five rows of a dataframe by default (or, if n is given as a higher number, that many rows)**df.shape** - how many entries does the dataframe have?**len(df)** - how many entries does the dataframe have?**df.dtypes** - what kinds of data fields does it contain? (integers, floats, objects)**df.describe()** - describes the dataframe**df.columns** - lists all column titles**df.values** - a matrix of the frame's contents that looks like a CSV
###Code
df.head()
df.tail()
df.shape
len(df)
df.dtypes
df.describe()
df.columns
df.values
###Output
_____no_output_____
###Markdown
Any ugly tabs? And other handling...**sep = "\t"** - for files that are tab-separated**header = None** - if the csv has no header**skiprows = n** - skips n rows**NaN** - placeholder for Not a Number (missing data, not a number)**np.nan** - code for dealing with NaNs (requires the "Numpy" library as np)
###Code
sep = "\t"
header = None
skiprows = n
np.nan
###Output
_____no_output_____
###Markdown
Selecting stuff**df** - the whole dataframe**df.field1** or **df["field1"]** - select one column**df[["field1", "field2"]]** - select several columns**df[condition]** - select rows only where the boolean is True--- example: condition = df["field"] == value**df.loc[index]** - get row at a particular index**df.iloc[integer]** - treats the index as if it were a sequence of integer positions**index_col="column"** - defines the column to use as the index
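A tiny runnable illustration of these selection forms, using a made-up dataframe:
###Code
import pandas as pd

# Hypothetical data, only to demonstrate the selection forms listed above.
df_demo = pd.DataFrame({"field1": [1, 2, 3], "field2": ["x", "y", "z"]})
print(df_demo[["field1", "field2"]])    # several columns
print(df_demo[df_demo["field1"] > 1])   # rows where the condition is True
print(df_demo.loc[0])                   # row at a particular index label
print(df_demo.iloc[-1])                 # row by integer position
###Output
_____no_output_____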
###Code
df.field1
df["field1"]
df[["field1, field2"]]
df[condition]
df.loc[index]
df.iloc[integer]
index_col="column"
###Output
_____no_output_____
###Markdown
Aggregating statistics**.max()** - maximum (value)**.min()** - minimum (value)**.mean()** - average**.std()** - standard deviation**.sum()** - sum**.count()** - count**.size()** - useful for double groups, similar to value_counts Modifying dataframes**df.copy()** - copies a dataframe (instead of just referencing it)**df.set_index("field")** - changes the index column to "field" (the column name you pass in) - **inplace=True** - applies the change to the object itself, not to a copy**df.rename_axis("Name")** - changes the name of the index column; with "None" the name can be removed - **inplace=True** - applies the change to the object itself, not to a copy
###Code
df.copy()
df.set_index("field")
df.rename_axis("Name")
###Output
_____no_output_____
###Markdown
Modifying columns in dataframes**df.insert(pos, "field1", values)** - inserts a new column "field1" with the given values at position pos**df.pop("field1")** - removes column "field1"**df.assign(newfield = df["field1"] ... )** - assigns values to new columns (the original remains unchanged) - **inplace = True** - applies the changes to the dataframe, not to a copy
###Code
df.insert(pos, "field1", values)
df.pop("field1")
df.assign(newfield = df.["field1"]...)
###Output
_____no_output_____
###Markdown
Modifying rows in dataframes**df.append(series, dataframe)** - appends rows and returns a new object - **inplace = True** - applies the changes to the dataframe, not to a copy**df.drop(df[condition].index)** - deletes rows of a dataframe according to a condition - **inplace = True** - applies the changes to the dataframe, not to a copy
###Code
df.append(series, dataframe)
df.drop(df[condition].index)
###Output
_____no_output_____
###Markdown
Modifying data structures**.value_counts()** - counts the contents (integers)**.sort_values()** - orders the contents - **ascending=False** - in descending order**df.rename_axis("Name")** - changes the name of the index column; with "None" the name can be removed - **inplace=True** - applies the change to the object itself, not to a copy**df.sort_index()** - sorts by index, not by values**df.pivot()** - transforms long data files into wide data files**df.melt()** - transforms wide into long data files**df.unstack()** - transforms groupby sub-rows into columns (good for stacked bar charts)**df.transpose()** - switches rows and columns over the whole dataset (shortcut: **df.T**)
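A small wide/long round trip illustrating pivot and melt (the data here is made up for illustration):
###Code
import pandas as pd

# Hypothetical wide table: one row per city, one column per year.
wide = pd.DataFrame({"city": ["Bern", "Basel"], "y2019": [10, 20], "y2020": [11, 21]})
long = wide.melt(id_vars="city", var_name="year", value_name="value")    # wide -> long
back = long.pivot(index="city", columns="year", values="value")          # long -> wide
print(long)
print(back)
###Output
_____no_output_____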
###Code
df["spalte"].value_counts().sort_values(ascending=False).head(5)
df["spalte"].sort_values
df.sort_index()
df.pivot()
df.melt()
df.unstack()
df.transpose()
###Output
_____no_output_____
###Markdown
Grouping data structures**df.groupby("field1")["field2"].function()** - for grouped output (needs columns) and a function**df.groupby(["field1", "field2"])** - groupby command on two levels
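A short runnable groupby example with made-up data:
###Code
import pandas as pd

# Hypothetical data, only to show grouping on one and on two levels.
df_demo = pd.DataFrame({"team": ["A", "A", "B"], "points": [3, 1, 2]})
print(df_demo.groupby("team")["points"].sum())      # one grouping level, aggregated with a function
print(df_demo.groupby(["team", "points"]).size())   # two grouping levels
###Output
_____no_output_____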
###Code
df.groupby("field1")["field2"].function ()
df.groupby(["field1", "field2"])
###Output
_____no_output_____
###Markdown
Combining/joining data structures**df.merge(df2)** - merges one dataframe with a second one**df.join(df2)** - joins a dataframe with the same number of rows to another one, horizontally**pd.concat([df1, df2])** - concatenates all dataframes passed in a list, vertically
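A short runnable example of merge and concat with two made-up dataframes:
###Code
import pandas as pd

# Hypothetical tables sharing an "id" column.
left = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})
right = pd.DataFrame({"id": [2, 3], "score": [20, 30]})
print(left.merge(right, on="id", how="inner"))       # only id 2 appears in both tables
print(pd.concat([left, right], ignore_index=True))   # stacked vertically with a new index
###Output
_____no_output_____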
###Code
df.merge(df2)
on = "field" # fieldnames to match (if they have the same name)
left_on = "df-field1" #fieldname to match on the left side
right_on = "df2-field" #fieldname to match on right side
left_index= True #weather to use the index as the left-side match field
right_index= True #weather to use the index as the right-side match field
how="inner/left/right/outer" # just like in SQL ?????
df.join(df2)
pd.concat([df1, df2])
axis=1 #fügt horizontal an, nicht vertikal
ignore_index= True #konstruiert neuen Index (verwendet nicht den existierenden)
###Output
_____no_output_____
###Markdown
PANDAS preparations Source: Simon Schmid, CAS Datenjournalismus MAZ 2018. Prepared for my basic needs ; ) Basics**pip install pandas** - to install pandas (command line) Import the required libraries**import pandas as pd****import numpy as np** - needed for dealing with NaNs**import datetime** - for handling times and dates**%matplotlib inline** - for visualizing charts**import seaborn as sns** - to prettify charts ; ))) Need help?**?** - get help (e.g. type pd.head?)
###Code
import pandas as pd
import numpy as np
import datetime
%matplotlib inline
import seaborn as sns
###Output
_____no_output_____
###Markdown
Reading files**df = pd.read_csv("filename.csv")** - loads the file into a dataframe if the original file is in the same folder as the Jupyter notebook**df = pd.read_excel("filename.xlsx")** - same as above. If the file to load sits in its own folder:**path = "dataprojects/wherezurichborn/filename.csv"** - path to the file stored in a variable (step 1)**df = pd.read_csv(path)** (step 2)
###Code
df = pd.read_csv("filename.csv")
df = pd.read_excel("filename.xlsx")
path = "Ordnerstruktur/wo_ist_das_file/filename.csv"
df = pd.read_csv(path)
###Output
_____no_output_____
###Markdown
Writing/saving files**df.to_csv** - saves a dataframe to a CSV**with index = False** - do not save the index column that a dataframe creates automatically**file_oder_liste.to_csv(path, index=True)** - writes the dataframe or list to a CSV at the given path, keeping the index
###Code
df.to_csv (path, index = True)
index = False
###Output
_____no_output_____ |
Day11.ipynb | ###Markdown
--- Day 11: Dumbo Octopus --- [](https://mybinder.org/v2/gh/oddrationale/AdventOfCode2021FSharp/main?urlpath=lab%2Ftree%2FDay11.ipynb) You enter a large cavern full of rare bioluminescent dumbo octopuses! They seem to not like the Christmas lights on your submarine, so you turn them off for now.There are 100 octopuses arranged neatly in a 10 by 10 grid. Each octopus slowly gains energy over time and flashes brightly for a moment when its energy is full. Although your lights are off, maybe you could navigate through the cave without disturbing the octopuses if you could predict when the flashes of light will happen.Each octopus has an energy level - your submarine can remotely measure the energy level of each octopus (your puzzle input). For example:5483143223274585471152645561736141336146635738547841675246452176841721688288113448468485545283751526The energy level of each octopus is a value between 0 and 9. Here, the top-left octopus has an energy level of 5, the bottom-right one has an energy level of 6, and so on.You can model the energy levels and flashes of light in steps. During a single step, the following occurs:First, the energy level of each octopus increases by 1.Then, any octopus with an energy level greater than 9 flashes. This increases the energy level of all adjacent octopuses by 1, including octopuses that are diagonally adjacent. If this causes an octopus to have an energy level greater than 9, it also flashes. This process continues as long as new octopuses keep having their energy level increased beyond 9. (An octopus can only flash at most once per step.)Finally, any octopus that flashed during this step has its energy level set to 0, as it used all of its energy to flash.Adjacent flashes can cause an octopus to flash on a step even if it begins that step with very little energy. Consider the middle octopus with 1 energy in this situation:Before any steps:1111119991191911999111111After step 1:3454340004500054000434543After step 2:4565451115611165111545654An octopus is highlighted when it flashed during the given step.Here is how the larger example above progresses:Before any steps:5483143223274585471152645561736141336146635738547841675246452176841721688288113448468485545283751526After step 1:6594254334385696582263756672847252447257746849658952786357563287952832799399224559579596656394862637After step 2:8807476555508908705485978896088485769600870090880066000889896800005943000000745690000008768700006848After step 3:0050900866850080057599000000399700000041993508006377123000007911250009221113000004211250000021119000After step 4:2263031977092303169700322211500041111163007619117400534111220042361120553224112215322472111132230211After step 5:4484144000204414400022533334931152333274118730328511646332331153472231664335223326433583222243341322After step 6:5595255111315525522233644446052263444496229841439622757443442264583342775446334437544694333354452433After step 7:6707366222437736633344755558273496655709350062560935099555663486694453886558555548655806444465574644After step 8:7818477333548847744456976669494608766830473494673047400976886900007564000000966680000047556800007755After step 9:9060000644780000097669000000805840000082585800009369624000008021250009222113000991111280977911119976After step 10:0481112976003111200900411125040081111406009911130600935112330442361130553225235005322506000032240000After step 10, there have been a total of 204 flashes. 
Fast forwarding, here is the same configuration every 10 steps:After step 20:3936556452568655680644965556904448655580445686557056800865777000009896000000034460000003644600009543After step 30:0643334118425333461133743334582225333337222933333822767333332754574565554445851194444471117944446119After step 40:6211111981042111111900421111150003111115000311111600656111110532351111332223459722222229762222222762After step 50:9655556447486555680544865556904458655580457486557057000865666000009887800000053368000006335680000538After step 60:2533334200274333464022643334582225333337222533333822878333333854573455185445861111754471111115446111After step 70:8211111164042111116600421111140004211115000021111600656111110532351111732223511757222234754572222754After step 80:1755555697596555560944865556804458655580457086557057000865667000008666000000099000000008000000000000After step 90:7433333522264333352222643334582226433337222243333822878333332854573333485445833333877793333333333333After step 100:0397666866074976691800539769330004297822000422989200532228770532222966932222896679222868666789998766After 100 steps, there have been a total of 1656 flashes.Given the starting energy levels of the dumbo octopuses in your cavern, simulate 100 steps. How many total flashes are there after 100 steps?
###Code
let input =
File.ReadAllLines @"input/11.txt"
|> Array.map (fun line ->
line
|> Seq.map (fun c -> c |> string |> int)
|> Array.ofSeq)
let increaseEnergy (input: int array array) =
input |> Array.map (fun row -> row |> Array.map ((+) 1))
let adjacent (input: int array array) (x, y) =
let maxY = input |> Seq.length
let maxX = input |> Seq.head |> Seq.length
[
( 0, -1) // up
( 1, -1) // top-right
( 1, 0) // right
( 1, 1) // bottom-right
( 0, 1) // down
(-1, 1) // bottom-left
(-1, 0) // left
(-1, -1) // top-left
]
|> List.map (fun (x', y') -> x + x', y + y')
|> List.filter (fun (x', y') ->
-1 < x' && x' < maxX &&
-1 < y' && y' < maxY)
let flash (input: int array array) (x, y) =
let neighbors = adjacent input (x, y)
input
|> Array.mapi (fun y' row ->
row
|> Array.mapi (fun x' i ->
match (x', y') with
| (x'', y'') when neighbors |> List.contains (x'', y'') -> i + 1
| _ -> i))
let flashAll (input: int array array) =
let greaterThanNine input =
input
|> Array.mapi (fun y row ->
row
|> Array.mapi (fun x i ->
if i > 9 then
Some (x, y)
else
None))
|> Array.collect id
|> Array.choose id
|> List.ofArray
let rec loop input flashed =
let newFlashes = input |> greaterThanNine |> List.except flashed
if newFlashes |> Seq.isEmpty then
input
else
loop (newFlashes |> Seq.fold (fun acc (x, y) -> flash acc (x, y)) input) (flashed @ newFlashes)
loop input List.empty
let resetToZero (input: int array array) =
input
|> Array.mapi (fun y row ->
row
|> Array.mapi (fun x i ->
match i with
| i when i > 9 -> 0
| _ -> i))
let countZeros (input: int array array) =
input
|> Array.collect id
|> Array.filter (fun i -> i = 0)
|> Array.length
let step = increaseEnergy >> flashAll >> resetToZero
let rec steps input =
seq {
yield input
yield! input |> step |> steps
}
#!time
input
|> steps
|> Seq.truncate 101
|> Seq.map countZeros
|> Seq.sum
###Output
_____no_output_____
###Markdown
--- Part Two --- It seems like the individual flashes aren't bright enough to navigate. However, you might have a better option: the flashes seem to be synchronizing!In the example above, the first time all octopuses flash simultaneously is step 195:After step 193:5877777777887777777777777777777777777777777777777777777777777777777777777777777777777777777777777777After step 194:6988888888998888888888888888888888888888888888888888888888888888888888888888888888888888888888888888After step 195:0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000If you can calculate the exact moments when the octopuses will all flash simultaneously, you should be able to navigate through the cavern. What is the first step during which all octopuses flash?
###Code
let allZeros (input: int array array) =
input
|> Array.collect id
|> Array.forall ((=) 0)
#!time
input
|> steps
|> Seq.findIndex allZeros
###Output
_____no_output_____
###Markdown
Reading and writing text filesTo read a text file, use the open function and set the file mode to 'r' (if you do not specify it, the default is 'r' as well)
###Code
def main():
f = open('ad.txt','r')
print(f.read())
f.close()
if __name__ == '__main__':
main()
###Output
谁最帅!
你最帅!
再说一次,谁最帅
我最帅!
###Markdown
If the file passed to the open function does not exist or cannot be opened, an exception is raised and the program crashes. To give the code some robustness and fault tolerance, use Python's exception mechanism to handle appropriately the code that may run into problems at runtime
###Code
def main():
f = None
try:
f = open('asd.txt','r')
print(f.read())
except FileNotFoundError:
print('无法打开指定文件')
except LookupError:
print('指定了未知的编码')
except UnicodeDecodeError:
print('读取文件时解码错误')
finally:
if f:
f.close()
if __name__ == '__main__':
main()
def main():
try:
with open('asd.txt', 'r') as f:
print(f.read())
except FileNotFoundError:
print('无法打开指定的文件!')
except LookupError:
print('指定了未知的编码!')
except UnicodeDecodeError:
print('读取文件时解码错误!')
if __name__ == '__main__':
main()
###Output
###Markdown
Besides reading a file with the file object's read method, you can also read it line by line with a for-in loop, or use the readlines method to read the file's lines into a list container
###Code
import time
def main():
# Read the whole file content at once
with open('ad.txt','r') as f:
print(f.read())
# Read line by line with a for-in loop
with open('ad.txt','r') as f:
for line in f:
print(line,end="")
time.sleep(0.5)
print()
# Read the file's lines into a list
with open('ad.txt') as f:
lines = f.readlines()
print(lines)
if __name__== '__main__':
main()
###Output
谁最帅!
你最帅!
再说一次,谁最帅
我最帅!
谁最帅!
你最帅!
再说一次,谁最帅
我最帅!
['谁最帅!\n', '你最帅!\n', '再说一次,谁最帅\n', '我最帅!']
###Markdown
Writing text informationSet the open function's file mode to 'w'. Write the prime numbers between 1 and 9999 into three files (primes between 1-99 are saved in a.txt, primes between 100-999 in b.txt, and primes between 1000-9999 in c.txt).
###Code
from math import sqrt
def is_prime(n):
"""判断素数的函数"""
assert n> 0
for factor in range(2,int(sqrt(n)) + 1):
if n % factor == 0:
return False
return True if n != 1 else False
def main():
filenames = ('a.txt','b.txt','c.txt')
fs_list = []
try:
for filename in filenames:
fs_list.append(open(filename,'w',encoding='gbk'))
for number in range(1,10000):
if is_prime(number):
if number< 100:
fs_list[0].write(str(number) + '\n')
elif number <1000:
fs_list[1].write(str(number) + '\n')
else:
fs_list[2].write(str(number) + '\n')
except IOError as ex:
print(ex)
print('写文件时发生错误!')
finally:
for fs in fs_list:
fs.close()
print('操作完成!')
if __name__ == '__main__':
main()
###Output
操作完成!
###Markdown
Reading binary files
###Code
def main():
try:
with open('dongna.jpg','rb') as fs1:
data = fs1.read()
print(type(data))
with open('dongna.jpg','wb') as fs2:
fs2.write(data)
except FileNotFoundError as e:
print('指定的文件无法打开.')
except IOError as e:
print('读写文件时出现错误.')
print('程序执行结束.')
if __name__ == '__main__':
main()
###Output
<class 'bytes'>
程序执行结束.
###Markdown
Reading and writing JSON filesWhat if you want to save the data in a list or a dictionary to a file? The answer is to save the data in JSON format. JSON is short for "JavaScript Object Notation"; it was originally a literal syntax for creating objects in the JavaScript language
###Code
import json
def main():
mydict = {
'name':'hanhan',
'age':'47',
'qq':12345689,
'friends':['试试','郍郍'],
'cars': [
{'brand': 'BYD', 'max_speed': 180},
{'brand': 'Audi', 'max_speed': 280},
{'brand': 'Benz', 'max_speed': 320}
]
}
try:
with open('data.json','w') as fs:
json.dump(mydict,fs)
except IOError as e:
print('保留数据完整')
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
The json module has four particularly important functions:dump -- serializes a Python object to a file in JSON formatdumps - turns a Python object into a JSON-format stringload - deserializes the JSON data in a file into an objectloads - deserializes the contents of a string into a Python object
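A short round-trip sketch for dumps and loads (added for illustration; the record below is made up):
###Code
import json

# Hypothetical record used only to show the string round trip.
record = {'name': 'hanhan', 'age': 47, 'friends': ['Tom', 'Jerry']}
text = json.dumps(record)      # Python object -> JSON string
restored = json.loads(text)    # JSON string -> Python object
print(text)
print(restored == record)      # True: the round trip preserves the data
###Output
_____no_output_____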
###Code
import requests
import json
def main():
resp = requests.get('http://api.tianapi.com/guonei/?key=APIKey&num=10')
data_model = json.loads(resp.text)
for news in data_model['newslist']:
print(news['title'])
if __name__ == '__main__':
main()
###Output
_____no_output_____
###Markdown
[](https://mybinder.org/v2/gh/oddrationale/AdventOfCode2020CSharp/main?urlpath=lab%2Ftree%2FDay11.ipynb) --- Day 11: Seating System ---
###Code
using System.IO;
var initialSeatLayout = File.ReadAllLines(@"input/11.txt").Select(line => line.ToCharArray()).ToArray();
char[][] GenerateNextLayout(char[][] layout)
{
var nextLayout = layout.Select(arr => (char[])arr.Clone()).ToArray();
var maxRow = layout.Length;
var maxCol = layout.First().Length;
ValueTuple<int, int>[] directions = {
(1, 0),
(-1, 0),
(0, -1),
(0, 1),
(-1, 1),
(1, 1),
(1, -1),
(-1, -1),
};
for (int i = 0; i < maxRow; i++)
{
for (int j = 0; j < maxCol; j++)
{
if (layout[i][j] == '.')
{
continue;
}
var adjacents = new List<char>();
foreach (var d in directions)
{
if (0 <= i + d.Item1 && i + d.Item1 < maxRow &&
0 <= j + d.Item2 && j + d.Item2 < maxCol &&
layout[i + d.Item1][j + d.Item2] != '.')
{
adjacents.Add(layout[i + d.Item1][j + d.Item2]);
}
}
if (layout[i][j] == 'L' && !adjacents.Where(seat => seat == '#').Any())
{
nextLayout[i][j] = '#';
}
else if (layout[i][j] == '#' && adjacents.Where(seat => seat == '#').Count() >= 4)
{
nextLayout[i][j] = 'L';
}
}
}
return nextLayout;
}
int CountFinalOccupiedSeats(char[][] layout)
{
char[][] currentLayout;
var nextLayout = layout;
do
{
currentLayout = nextLayout;
nextLayout = GenerateNextLayout(currentLayout);
} while (string.Join("\n", currentLayout.Select(row => string.Join(string.Empty, row)))
!= string.Join("\n", nextLayout.Select(row => string.Join(string.Empty, row))));
return currentLayout.SelectMany(row => row).Where(seat => seat == '#').Count();
}
CountFinalOccupiedSeats(initialSeatLayout)
###Output
_____no_output_____
###Markdown
--- Part Two ---
###Code
char[][] GenerateNextLayout2(char[][] layout)
{
var nextLayout = layout.Select(arr => (char[])arr.Clone()).ToArray();
var maxRow = layout.Length;
var maxCol = layout.First().Length;
ValueTuple<int, int>[] directions = {
(1, 0),
(-1, 0),
(0, -1),
(0, 1),
(-1, 1),
(1, 1),
(1, -1),
(-1, -1),
};
for (int i = 0; i < maxRow; i++)
{
for (int j = 0; j < maxCol; j++)
{
if (layout[i][j] == '.')
{
continue;
}
var adjacents = new List<char>();
foreach (var d in directions)
{
var steps = 1;
while (0 <= i + steps*d.Item1 && i + steps*d.Item1 < maxRow
&& 0 <= j + steps*d.Item2 && j + steps*d.Item2 < maxCol)
{
if (layout[i + steps*d.Item1][j + steps*d.Item2] != '.')
{
adjacents.Add(layout[i + steps*d.Item1][j + steps*d.Item2]);
break;
}
steps++;
}
}
if (layout[i][j] == 'L' && !adjacents.Where(seat => seat == '#').Any())
{
nextLayout[i][j] = '#';
}
else if (layout[i][j] == '#' && adjacents.Where(seat => seat == '#').Count() >= 5)
{
nextLayout[i][j] = 'L';
}
}
}
return nextLayout;
}
int CountFinalOccupiedSeats2(char[][] layout)
{
char[][] currentLayout;
var nextLayout = layout;
do
{
currentLayout = nextLayout;
nextLayout = GenerateNextLayout2(currentLayout);
} while (string.Join("\n", currentLayout.Select(row => string.Join(string.Empty, row)))
!= string.Join("\n", nextLayout.Select(row => string.Join(string.Empty, row))));
return currentLayout.SelectMany(row => row).Where(seat => seat == '#').Count();
}
CountFinalOccupiedSeats2(initialSeatLayout)
###Output
_____no_output_____ |
07_Visualization/Tips/Exercises_with_code_and_solutions.ipynb | ###Markdown
Tips Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and titles
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differ the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by the time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plot graphs, one for Male and another for Female, presenting the total_bill and tip relationship, differing by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
Tips Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and titles
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differ the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by the time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plot graphs, one for Male and another for Female, presenting the total_bill and tip relationship, differing by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
Tips Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and titles
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differ the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by the time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plot graphs, one for Male and another for Female, presenting the total_bill and tip relationship, differing by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
Tips Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and titles
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differ the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by the time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plot graphs, one for Male and another for Female, presenting the total_bill and tip relationship, differing by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
Tips Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and titles
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differ the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by the time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plot graphs, one for Male and another for Female, presenting the total_bill and tip relationship, differing by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
Tips Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
UsageError: Line magic function `%` not found.
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and titles
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differ the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by the time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plot graphs, one for Male and another for Female, presenting the total_bill and tip relationship, differing by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
TipsCheck out [Tips Visualization Exercises Video Tutorial](https://youtu.be/oiuKFigW4YM) to watch a data scientist go through the exercises Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and titles
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differ the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by the time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plot graphs, one for Male and another for Female, presenting the total_bill and tip relationship, differing by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
Tips Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and titles
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differ the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by the time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plot graphs, one for Male and another for Female, presenting the total_bill and tip relationship, differing by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
TipsCheck out [Tips Visualization Exercises Video Tutorial](https://youtu.be/oiuKFigW4YM) to watch a data scientist go through the exercises Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and titles
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differ the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plots, one for Male and another for Female, presenting the relationship between total_bill and tip, differentiated by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
Tips Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and title
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differentiating the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plots, one for Male and another for Female, presenting the relationship between total_bill and tip, differentiated by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
Tips Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and title
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
/Users/giovanni/Desktop/AI_ML_DL_DataScience/anaconda3/envs/DL/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
/Users/giovanni/Desktop/AI_ML_DL_DataScience/anaconda3/envs/DL/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True)
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differentiating the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plots, one for Male and another for Female, presenting the relationship between total_bill and tip, differentiated by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
Tips Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
%matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
UsageError: Line magic function `%` not found.
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and title
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differentiating the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plots, one for Male and another for Female, presenting the relationship between total_bill and tip, differentiated by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
TipsCheck out [Tips Visualization Exercises Video Tutorial](https://youtu.be/oiuKFigW4YM) to watch a data scientist go through the exercises Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and title
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differentiating the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plots, one for Male and another for Female, presenting the relationship between total_bill and tip, differentiated by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
TipsCheck out [Tips Visualization Exercises Video Tutorial](https://youtu.be/oiuKFigW4YM) to watch a data scientist go through the exercises Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and title
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differentiating the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plots, one for Male and another for Female, presenting the relationship between total_bill and tip, differentiated by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
Tips Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and title
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differentiating the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plots, one for Male and another for Female, presenting the relationship between total_bill and tip, differentiated by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____
###Markdown
Tips Introduction:This exercise was created based on the tutorial and documentation from [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/index.html) The dataset being used is tips from Seaborn. Step 1. Import the necessary libraries:
###Code
import pandas as pd
# visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
# print the graphs in the notebook
% matplotlib inline
# set seaborn style to white
sns.set_style("white")
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv). Step 3. Assign it to a variable called tips
###Code
url = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Tips/tips.csv'
tips = pd.read_csv(url)
tips.head()
###Output
_____no_output_____
###Markdown
Step 4. Delete the Unnamed 0 column
###Code
del tips['Unnamed: 0']
tips.head()
###Output
_____no_output_____
###Markdown
Step 5. Plot the total_bill column histogram
###Code
# create histogram
ttbill = sns.distplot(tips.total_bill);
# set labels and title
ttbill.set(xlabel = 'Value', ylabel = 'Frequency', title = "Total Bill")
# take out the right and upper borders
sns.despine()
###Output
_____no_output_____
###Markdown
Step 6. Create a scatter plot presenting the relationship between total_bill and tip
###Code
sns.jointplot(x ="total_bill", y ="tip", data = tips)
###Output
_____no_output_____
###Markdown
Step 7. Create one image with the relationship of total_bill, tip and size. Hint: It is just one function.
###Code
sns.pairplot(tips)
###Output
_____no_output_____
###Markdown
Step 8. Present the relationship between days and total_bill value
###Code
sns.stripplot(x = "day", y = "total_bill", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 9. Create a scatter plot with the day as the y-axis and tip as the x-axis, differentiating the dots by sex
###Code
sns.stripplot(x = "tip", y = "day", hue = "sex", data = tips, jitter = True);
###Output
_____no_output_____
###Markdown
Step 10. Create a box plot presenting the total_bill per day, differentiating by time (Dinner or Lunch)
###Code
sns.boxplot(x = "day", y = "total_bill", hue = "time", data = tips);
###Output
_____no_output_____
###Markdown
Step 11. Create two histograms of the tip value, one for Dinner and one for Lunch. They must be side by side.
###Code
# better seaborn style
sns.set(style = "ticks")
# creates FacetGrid
g = sns.FacetGrid(tips, col = "time")
g.map(plt.hist, "tip");
###Output
_____no_output_____
###Markdown
Step 12. Create two scatter plots, one for Male and another for Female, presenting the relationship between total_bill and tip, differentiated by smoker or non-smoker. They must be side by side.
###Code
g = sns.FacetGrid(tips, col = "sex", hue = "smoker")
g.map(plt.scatter, "total_bill", "tip", alpha =.7)
g.add_legend();
###Output
_____no_output_____ |
notebooks/tutorials/02_mednist_app.ipynb | ###Markdown
Deploying a MedNIST Classifier App with MONAI Deploy App SDKThis tutorial demos the process of packaging up a trained model using MONAI Deploy App SDK into an artifact which can be run as a local program performing inference, a workflow job doing the same, and a Docker containerized workflow execution.In this tutorial, we will train a MedNIST classifier like the [MONAI tutorial here](https://github.com/Project-MONAI/tutorials/blob/master/2d_classification/mednist_tutorial.ipynb) and then implement & package the inference application, executing the application locally. Train a MedNIST classifier model with MONAI Core Setup environment
###Code
# Install necessary packages for MONAI Core
!python -c "import monai" || pip install -q "monai[pillow, tqdm]"
!python -c "import ignite" || pip install -q "monai[ignite]"
!python -c "import gdown" || pip install -q "monai[gdown]"
# Install MONAI Deploy App SDK package
!python -c "import monai.deploy" || pip install -q "monai-deploy-app-sdk"
###Output
_____no_output_____
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import shutil
import tempfile
import glob
import PIL.Image
import torch
import numpy as np
from ignite.engine import Events
from monai.apps import download_and_extract
from monai.config import print_config
from monai.networks.nets import DenseNet121
from monai.engines import SupervisedTrainer
from monai.transforms import (
AddChannel,
Compose,
LoadImage,
RandFlip,
RandRotate,
RandZoom,
ScaleIntensity,
EnsureType,
)
from monai.utils import set_determinism
set_determinism(seed=0)
print_config()
###Output
MONAI version: 0.6.0
Numpy version: 1.19.5
Pytorch version: 1.9.0
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 0ad9e73639e30f4f1af5a1f4a45da9cb09930179
Optional dependencies:
Pytorch Ignite version: 0.4.5
Nibabel version: 3.2.1
scikit-image version: 0.17.2
Pillow version: 8.3.1
Tensorboard version: 2.6.0
gdown version: 3.13.0
TorchVision version: 0.10.0
ITK version: 5.2.0
tqdm version: 4.62.1
lmdb version: NOT INSTALLED or UNKNOWN VERSION.
psutil version: 5.8.0
pandas version: 1.1.5
einops version: 0.3.2
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Download datasetThe MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions),[the RSNA Bone Age Challenge](https://www.rsna.org/education/ai-resources-and-training/ai-image-challenge/rsna-pediatric-bone-age-challenge-2017),and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).The dataset is kindly made available by [Dr. Bradley J. Erickson M.D., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic)under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).If you use the MedNIST dataset, please acknowledge the source.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
resource = "https://drive.google.com/uc?id=1QsnnkvZyJPcbRoV_ArW8SnE1OTuoVbKE"
md5 = "0bc7306e7427e00ad1c5526a6677552d"
compressed_file = os.path.join(root_dir, "MedNIST.tar.gz")
data_dir = os.path.join(root_dir, "MedNIST")
if not os.path.exists(data_dir):
download_and_extract(resource, compressed_file, root_dir, md5)
subdirs = sorted(glob.glob(f"{data_dir}/*/"))
class_names = [os.path.basename(sd[:-1]) for sd in subdirs]
image_files = [glob.glob(f"{sb}/*") for sb in subdirs]
image_files_list = sum(image_files, [])
image_class = sum(([i] * len(f) for i, f in enumerate(image_files)), [])
image_width, image_height = PIL.Image.open(image_files_list[0]).size
print(f"Label names: {class_names}")
print(f"Label counts: {list(map(len, image_files))}")
print(f"Total image count: {len(image_class)}")
print(f"Image dimensions: {image_width} x {image_height}")
###Output
Label names: ['AbdomenCT', 'BreastMRI', 'CXR', 'ChestCT', 'Hand', 'HeadCT']
Label counts: [10000, 8954, 10000, 10000, 10000, 10000]
Total image count: 58954
Image dimensions: 64 x 64
###Markdown
Setup and trainHere we'll create a transform sequence and train the network, omitting validation and testing since we know this does indeed work and it's not needed here:(train_transforms)=
###Code
train_transforms = Compose(
[
LoadImage(image_only=True),
AddChannel(),
ScaleIntensity(),
RandRotate(range_x=np.pi / 12, prob=0.5, keep_size=True),
RandFlip(spatial_axis=0, prob=0.5),
RandZoom(min_zoom=0.9, max_zoom=1.1, prob=0.5),
EnsureType(),
]
)
class MedNISTDataset(torch.utils.data.Dataset):
def __init__(self, image_files, labels, transforms):
self.image_files = image_files
self.labels = labels
self.transforms = transforms
def __len__(self):
return len(self.image_files)
def __getitem__(self, index):
return self.transforms(self.image_files[index]), self.labels[index]
# just one dataset and loader, we won't bother with validation or testing
train_ds = MedNISTDataset(image_files_list, image_class, train_transforms)
train_loader = torch.utils.data.DataLoader(train_ds, batch_size=300, shuffle=True, num_workers=10)
device = torch.device("cuda:0")
net = DenseNet121(spatial_dims=2, in_channels=1, out_channels=len(class_names)).to(device)
loss_function = torch.nn.CrossEntropyLoss()
opt = torch.optim.Adam(net.parameters(), 1e-5)
max_epochs = 5
def _prepare_batch(batch, device, non_blocking):
return tuple(b.to(device) for b in batch)
trainer = SupervisedTrainer(device, max_epochs, train_loader, net, opt, loss_function, prepare_batch=_prepare_batch)
@trainer.on(Events.EPOCH_COMPLETED)
def _print_loss(engine):
print(f"Epoch {engine.state.epoch}/{engine.state.max_epochs} Loss: {engine.state.output[0]['loss']}")
trainer.run()
###Output
Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448272031/work/c10/core/TensorImpl.h:1156.)
###Markdown
The network will be saved out here as a Torchscript object named `classifier.zip`
###Code
torch.jit.script(net).save("classifier.zip")
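# Optional sanity check (an addition, not part of the original tutorial): reload the
# saved TorchScript artifact and confirm a forward pass returns one logit per class.
# It reuses `device`, `image_width`, `image_height`, and `class_names` defined above.
reloaded = torch.jit.load("classifier.zip").to(device).eval()
dummy_input = torch.rand(1, 1, image_width, image_height, device=device)
assert reloaded(dummy_input).shape == (1, len(class_names))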
###Output
_____no_output_____
###Markdown
Implementing and Packaging Application with MONAI Deploy App SDKBased on the Torchscript model(`classifier.zip`), we will implement an application that process an input Jpeg image and write the prediction(classification) result as JSON file(`output.json`). Creating Operators and connecting them in Application classWe used the following [train transforms](train_transforms) as pre-transforms during the training.```{code-block} python---lineno-start: 1emphasize-lines: 3,4,5,9caption: | Train transforms used in training---train_transforms = Compose( [ LoadImage(image_only=True), AddChannel(), ScaleIntensity(), RandRotate(range_x=np.pi / 12, prob=0.5, keep_size=True), RandFlip(spatial_axis=0, prob=0.5), RandZoom(min_zoom=0.9, max_zoom=1.1, prob=0.5), EnsureType(), ])````RandRotate`, `RandFlip`, and `RandZoom` transforms are used only for training and those are not necessary during the inference.In our inference application, we will define two operators:1. `LoadPILOperator` - Load a JPEG image from the input path and pass the loaded image object to the next operator. - This Operator does similar job with `LoadImage(image_only=True)` transform in *train_transforms*, but handles only one image. - **Input**: a file path ([`DataPath`](/modules/_autosummary/monai.deploy.core.domain.DataPath)) - **Output**: an image object in memory ([`Image`](/modules/_autosummary/monai.deploy.core.domain.Image))2. `MedNISTClassifierOperator` - Pre-transform the given image by using MONAI's `Compose` class, feed to the Torchscript model (`classifier.zip`), and write the prediction into JSON file(`output.json`) - Pre-transforms consist of three transforms -- `AddChannel`, `ScaleIntensity`, and `EnsureType`. - **Input**: an image object in memory ([`Image`](/modules/_autosummary/monai.deploy.core.domain.Image)) - **Output**: a folder path that the prediction result(`output.json`) would be written ([`DataPath`](/modules/_autosummary/monai.deploy.core.domain.DataPath))The workflow of the application would look like this.```{mermaid}%%{init: {"theme": "base", "themeVariables": { "fontSize": "16px"}} }%%classDiagram direction LR LoadPILOperator --|> MedNISTClassifierOperator : image...image class LoadPILOperator { image : DISK image(out) IN_MEMORY } class MedNISTClassifierOperator { image : IN_MEMORY output(out) DISK }``` Setup importsLet's import necessary classes/decorators and define `MEDNIST_CLASSES`.
###Code
import monai.deploy.core as md # 'md' stands for MONAI Deploy (or can use 'core' instead)
from monai.deploy.core import (
Application,
DataPath,
ExecutionContext,
Image,
InputContext,
IOType,
Operator,
OutputContext,
)
from monai.transforms import AddChannel, Compose, EnsureType, ScaleIntensity
MEDNIST_CLASSES = ["AbdomenCT", "BreastMRI", "CXR", "ChestCT", "Hand", "HeadCT"]
###Output
_____no_output_____
###Markdown
Creating Operator classes LoadPILOperator
###Code
@md.input("image", DataPath, IOType.DISK)
@md.output("image", Image, IOType.IN_MEMORY)
@md.env(pip_packages=["pillow"])
class LoadPILOperator(Operator):
"""Load image from the given input (DataPath) and set numpy array to the output (Image)."""
def compute(self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext):
import numpy as np
from PIL import Image as PILImage
input_path = op_input.get().path
if input_path.is_dir():
input_path = next(input_path.glob("*.*")) # take the first file
image = PILImage.open(input_path)
image = image.convert("L") # convert to greyscale image
image_arr = np.asarray(image)
output_image = Image(image_arr) # create Image domain object with a numpy array
op_output.set(output_image)
###Output
_____no_output_____
###Markdown
MedNISTClassifierOperator
###Code
@md.input("image", Image, IOType.IN_MEMORY)
@md.output("output", DataPath, IOType.DISK)
@md.env(pip_packages=["monai"])
class MedNISTClassifierOperator(Operator):
"""Classifies the given image and returns the class name."""
@property
def transform(self):
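# AddChannel turns the (64, 64) array into (1, 64, 64), ScaleIntensity rescales
# pixel values to [0, 1] by default, and EnsureType converts the result to a
# torch tensor before it is fed to the model.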
return Compose([AddChannel(), ScaleIntensity(), EnsureType()])
def compute(self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext):
import json
import torch
img = op_input.get().asnumpy() # (64, 64), uint8
image_tensor = self.transform(img) # (1, 64, 64), torch.float64
image_tensor = image_tensor[None].float() # (1, 1, 64, 64), torch.float32
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
image_tensor = image_tensor.to(device)
model = context.models.get() # get a TorchScriptModel object
with torch.no_grad():
outputs = model(image_tensor)
_, output_classes = outputs.max(dim=1)
result = MEDNIST_CLASSES[output_classes[0]] # get the class name
print(result)
# Get output (folder) path and create the folder if not exists
output_folder = op_output.get().path
output_folder.mkdir(parents=True, exist_ok=True)
# Write result to "output.json"
output_path = output_folder / "output.json"
with open(output_path, "w") as fp:
json.dump(result, fp)
###Output
_____no_output_____
###Markdown
Creating Application classOur application class would look like below.It defines the `App` class, which inherits from the `Application` class.`LoadPILOperator` is connected to `MedNISTClassifierOperator` by using `self.add_flow()` in the `compose()` method of `App`.
###Code
@md.resource(cpu=1, gpu=1, memory="1Gi")
class App(Application):
"""Application class for the MedNIST classifier."""
def compose(self):
load_pil_op = LoadPILOperator()
classifier_op = MedNISTClassifierOperator()
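# add_flow connects the "image" output of LoadPILOperator to the "image" input
# of MedNISTClassifierOperator, so the in-memory Image object produced upstream
# is what the classifier operator receives.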
self.add_flow(load_pil_op, classifier_op)
###Output
_____no_output_____
###Markdown
Executing app locallyLet's find a test input file path to use.
###Code
test_input_path = image_files[0][0]
print(f"Test input file path: {test_input_path}")
###Output
Test input file path: /tmp/tmpgh08b1ks/MedNIST/AbdomenCT/007000.jpeg
###Markdown
We can execute the app in the Jupyter notebook.
###Code
app = App()
app.run(input=test_input_path, output="output", model="classifier.zip")
!cat output/output.json
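# The prediction can also be read back in Python (shown here as an optional
# alternative to `cat`), assuming the app wrote output/output.json as above.
import json
from pathlib import Path
print(json.loads(Path("output/output.json").read_text()))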
###Output
"AbdomenCT"
###Markdown
Once the application is verified inside Jupyter notebook, we can write the whole application as a file(`mednist_classifier_monaideploy.py`) by concatenating code above, then add the following lines:```pythonif __name__ == "__main__": App(do_run=True)```The above lines are needed to execute the application code by using `python` interpreter.
###Code
%%writefile mednist_classifier_monaideploy.py
# Copyright 2021 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import monai.deploy.core as md # 'md' stands for MONAI Deploy (or can use 'core' instead)
from monai.deploy.core import (
Application,
DataPath,
ExecutionContext,
Image,
InputContext,
IOType,
Operator,
OutputContext,
)
from monai.transforms import AddChannel, Compose, EnsureType, ScaleIntensity
MEDNIST_CLASSES = ["AbdomenCT", "BreastMRI", "CXR", "ChestCT", "Hand", "HeadCT"]
@md.input("image", DataPath, IOType.DISK)
@md.output("image", Image, IOType.IN_MEMORY)
@md.env(pip_packages=["pillow"])
class LoadPILOperator(Operator):
"""Load image from the given input (DataPath) and set numpy array to the output (Image)."""
def compute(self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext):
import numpy as np
from PIL import Image as PILImage
input_path = op_input.get().path
if input_path.is_dir():
input_path = next(input_path.glob("*.*")) # take the first file
image = PILImage.open(input_path)
image = image.convert("L") # convert to greyscale image
image_arr = np.asarray(image)
output_image = Image(image_arr) # create Image domain object with a numpy array
op_output.set(output_image)
@md.input("image", Image, IOType.IN_MEMORY)
@md.output("output", DataPath, IOType.DISK)
@md.env(pip_packages=["monai"])
class MedNISTClassifierOperator(Operator):
"""Classifies the given image and returns the class name."""
@property
def transform(self):
return Compose([AddChannel(), ScaleIntensity(), EnsureType()])
def compute(self, op_input: InputContext, op_output: OutputContext, context: ExecutionContext):
import json
import torch
img = op_input.get().asnumpy() # (64, 64), uint8
image_tensor = self.transform(img) # (1, 64, 64), torch.float64
image_tensor = image_tensor[None].float() # (1, 1, 64, 64), torch.float32
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
image_tensor = image_tensor.to(device)
model = context.models.get() # get a TorchScriptModel object
with torch.no_grad():
outputs = model(image_tensor)
_, output_classes = outputs.max(dim=1)
result = MEDNIST_CLASSES[output_classes[0]] # get the class name
print(result)
# Get output (folder) path and create the folder if not exists
output_folder = op_output.get().path
output_folder.mkdir(parents=True, exist_ok=True)
# Write result to "output.json"
output_path = output_folder / "output.json"
with open(output_path, "w") as fp:
json.dump(result, fp)
@md.resource(cpu=1, gpu=1, memory="1Gi")
class App(Application):
"""Application class for the MedNIST classifier."""
def compose(self):
load_pil_op = LoadPILOperator()
classifier_op = MedNISTClassifierOperator()
self.add_flow(load_pil_op, classifier_op)
if __name__ == "__main__":
App(do_run=True)
###Output
Writing mednist_classifier_monaideploy.py
###Markdown
This time, let's execute the app from the command line.
###Code
!python mednist_classifier_monaideploy.py -i {test_input_path} -o output -m classifier.zip
###Output
[34mGoing to initiate execution of operator LoadPILOperator[39m
[32mExecuting operator LoadPILOperator [33m(Process ID: 18193, Operator ID: de9a33aa-0abb-4e64-88af-90b27617ff63)[39m
[34mDone performing execution of operator LoadPILOperator
[39m
[34mGoing to initiate execution of operator MedNISTClassifierOperator[39m
[32mExecuting operator MedNISTClassifierOperator [33m(Process ID: 18193, Operator ID: 73bfa497-459c-4ef3-998a-8d162be57687)[39m
Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448272031/work/c10/core/TensorImpl.h:1156.)
AbdomenCT
[34mDone performing execution of operator MedNISTClassifierOperator
[39m
###Markdown
The above command is equivalent to the following command line:
###Code
!monai-deploy exec mednist_classifier_monaideploy.py -i {test_input_path} -o output -m classifier.zip
!cat output/output.json
###Output
"AbdomenCT"
###Markdown
Packaging app Let's package the app with MONAI Application Packager.
###Code
!monai-deploy package mednist_classifier_monaideploy.py --tag mednist_app:latest --model classifier.zip # -l DEBUG
###Output
Building MONAI Application Package... Done
[2021-09-20 17:01:24,898] [INFO] (app_packager) - Successfully built mednist_app:latest
###Markdown
:::{note}Building a MONAI Application Package (Docker image) can take time. Use `-l DEBUG` option if you want to see the progress.:::We can see that the Docker image is created.
###Code
!docker image ls | grep mednist_app
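# Optionally (an addition to the tutorial), look at the image layers to see where the
# roughly 15 GB image size comes from; `docker history` is standard Docker CLI.
!docker history mednist_app:latest | head -n 10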
###Output
mednist_app latest 8c78cc6e0966 3 seconds ago 15.3GB
###Markdown
Executing packaged app locallyThe packaged app can be run locally through MONAI Application Runner.
###Code
# Copy a test input file to 'input' folder
!mkdir -p input && rm -rf input/*
!cp {test_input_path} input/
# Launch the app
!monai-deploy run mednist_app:latest input output
!cat output/output.json
###Output
"AbdomenCT"
###Markdown
**Note**: Please execute the following script once the exercise is done.
###Code
# Remove the data files in the temporary folder
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
Deploying a MedNIST Classifier App with MONAI Deploy App SDKThis tutorial demos the process of packaging up a trained model using MONAI Deploy App SDK into an artifact which can be run as a local program performing inference, a workflow job doing the same, and a Docker containerized workflow execution.In this tutorial, we will train a MedNIST classifier like the [MONAI tutorial here](https://github.com/Project-MONAI/tutorials/blob/master/2d_classification/mednist_tutorial.ipynb) and then implement & package the inference application, executing the application locally. Train a MedNIST classifier model with MONAI Core Setup environment
###Code
# Install necessary packages for MONAI Core
!python -c "import monai" || pip install -q "monai[pillow, tqdm]"
!python -c "import ignite" || pip install -q "monai[ignite]"
!python -c "import gdown" || pip install -q "monai[gdown]"
# Install MONAI Deploy App SDK package
!python -c "import monai.deploy" || pip install -q "monai-deploy-app-sdk"
###Output
_____no_output_____
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import shutil
import tempfile
import glob
import PIL.Image
import torch
import numpy as np
from ignite.engine import Events
from monai.apps import download_and_extract
from monai.config import print_config
from monai.networks.nets import DenseNet121
from monai.engines import SupervisedTrainer
from monai.transforms import (
AddChannel,
Compose,
LoadImage,
RandFlip,
RandRotate,
RandZoom,
ScaleIntensity,
EnsureType,
)
from monai.utils import set_determinism
set_determinism(seed=0)
print_config()
###Output
MONAI version: 0.6.0
Numpy version: 1.19.5
Pytorch version: 1.9.0
MONAI flags: HAS_EXT = False, USE_COMPILED = False
MONAI rev id: 0ad9e73639e30f4f1af5a1f4a45da9cb09930179
Optional dependencies:
Pytorch Ignite version: 0.4.5
Nibabel version: 3.2.1
scikit-image version: 0.17.2
Pillow version: 8.3.1
Tensorboard version: 2.6.0
gdown version: 3.13.0
TorchVision version: 0.10.0
ITK version: 5.2.0
tqdm version: 4.62.1
lmdb version: NOT INSTALLED or UNKNOWN VERSION.
psutil version: 5.8.0
pandas version: 1.1.5
einops version: 0.3.2
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Download datasetThe MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions),[the RSNA Bone Age Challenge](https://www.rsna.org/education/ai-resources-and-training/ai-image-challenge/rsna-pediatric-bone-age-challenge-2017),and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).The dataset is kindly made available by [Dr. Bradley J. Erickson M.D., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic)under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).If you use the MedNIST dataset, please acknowledge the source.
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
resource = "https://drive.google.com/uc?id=1QsnnkvZyJPcbRoV_ArW8SnE1OTuoVbKE"
md5 = "0bc7306e7427e00ad1c5526a6677552d"
compressed_file = os.path.join(root_dir, "MedNIST.tar.gz")
data_dir = os.path.join(root_dir, "MedNIST")
if not os.path.exists(data_dir):
download_and_extract(resource, compressed_file, root_dir, md5)
subdirs = sorted(glob.glob(f"{data_dir}/*/"))
class_names = [os.path.basename(sd[:-1]) for sd in subdirs]
image_files = [glob.glob(f"{sb}/*") for sb in subdirs]
image_files_list = sum(image_files, [])
image_class = sum(([i] * len(f) for i, f in enumerate(image_files)), [])
image_width, image_height = PIL.Image.open(image_files_list[0]).size
print(f"Label names: {class_names}")
print(f"Label counts: {list(map(len, image_files))}")
print(f"Total image count: {len(image_class)}")
print(f"Image dimensions: {image_width} x {image_height}")
###Output
Label names: ['AbdomenCT', 'BreastMRI', 'CXR', 'ChestCT', 'Hand', 'HeadCT']
Label counts: [10000, 8954, 10000, 10000, 10000, 10000]
Total image count: 58954
Image dimensions: 64 x 64
###Markdown
Setup and trainHere we'll create a transform sequence and train the network, omitting validation and testing since we know this does indeed work and it's not needed here:(train_transforms)=
###Code
train_transforms = Compose(
[
LoadImage(image_only=True),
AddChannel(),
ScaleIntensity(),
RandRotate(range_x=np.pi / 12, prob=0.5, keep_size=True),
RandFlip(spatial_axis=0, prob=0.5),
RandZoom(min_zoom=0.9, max_zoom=1.1, prob=0.5),
EnsureType(),
]
)
class MedNISTDataset(torch.utils.data.Dataset):
def __init__(self, image_files, labels, transforms):
self.image_files = image_files
self.labels = labels
self.transforms = transforms
def __len__(self):
return len(self.image_files)
def __getitem__(self, index):
return self.transforms(self.image_files[index]), self.labels[index]
# just one dataset and loader, we won't bother with validation or testing
train_ds = MedNISTDataset(image_files_list, image_class, train_transforms)
train_loader = torch.utils.data.DataLoader(train_ds, batch_size=300, shuffle=True, num_workers=10)
device = torch.device("cuda:0")
net = DenseNet121(spatial_dims=2, in_channels=1, out_channels=len(class_names)).to(device)
loss_function = torch.nn.CrossEntropyLoss()
opt = torch.optim.Adam(net.parameters(), 1e-5)
max_epochs = 5
def _prepare_batch(batch, device, non_blocking):
return tuple(b.to(device) for b in batch)
trainer = SupervisedTrainer(device, max_epochs, train_loader, net, opt, loss_function, prepare_batch=_prepare_batch)
@trainer.on(Events.EPOCH_COMPLETED)
def _print_loss(engine):
print(f"Epoch {engine.state.epoch}/{engine.state.max_epochs} Loss: {engine.state.output[0]['loss']}")
trainer.run()
###Output
Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448272031/work/c10/core/TensorImpl.h:1156.)
###Markdown
The network will be saved out here as a Torchscript object named `classifier.zip`
###Code
torch.jit.script(net).save("classifier.zip")
###Output
_____no_output_____
###Markdown
Implementing and Packaging Application with MONAI Deploy App SDKBased on the Torchscript model(`classifier.zip`), we will implement an application that process an input Jpeg image and write the prediction(classification) result as JSON file(`output.json`). Creating Operators and connecting them in Application classWe used the following [train transforms](train_transforms) as pre-transforms during the training.```{code-block} python---lineno-start: 1emphasize-lines: 3,4,5,9caption: | Train transforms used in training---train_transforms = Compose( [ LoadImage(image_only=True), AddChannel(), ScaleIntensity(), RandRotate(range_x=np.pi / 12, prob=0.5, keep_size=True), RandFlip(spatial_axis=0, prob=0.5), RandZoom(min_zoom=0.9, max_zoom=1.1, prob=0.5), EnsureType(), ])````RandRotate`, `RandFlip`, and `RandZoom` transforms are used only for training and those are not necessary during the inference.In our inference application, we will define two operators:1. `LoadPILOperator` - Load a JPEG image from the input path and pass the loaded image object to the next operator. - This Operator does similar job with `LoadImage(image_only=True)` transform in *train_transforms*, but handles only one image. - **Input**: a file path ([`DataPath`](/modules/_autosummary/monai.deploy.core.domain.DataPath)) - **Output**: an image object in memory ([`Image`](/modules/_autosummary/monai.deploy.core.domain.Image))2. `MedNISTClassifierOperator` - Pre-transform the given image by using MONAI's `Compose` class, feed to the Torchscript model (`classifier.zip`), and write the prediction into JSON file(`output.json`) - Pre-transforms consist of three transforms -- `AddChannel`, `ScaleIntensity`, and `EnsureType`. - **Input**: an image object in memory ([`Image`](/modules/_autosummary/monai.deploy.core.domain.Image)) - **Output**: a folder path that the prediction result(`output.json`) would be written ([`DataPath`](/modules/_autosummary/monai.deploy.core.domain.DataPath))The workflow of the application would look like this.```{mermaid}%%{init: {"theme": "base", "themeVariables": { "fontSize": "16px"}} }%%classDiagram direction LR LoadPILOperator --|> MedNISTClassifierOperator : image...image class LoadPILOperator { image : DISK image(out) IN_MEMORY } class MedNISTClassifierOperator { image : IN_MEMORY output(out) DISK }``` Setup importsLet's import necessary classes/decorators and define `MEDNIST_CLASSES`.
###Code
from monai.deploy.core import (
Application,
DataPath,
ExecutionContext,
Image,
InputContext,
IOType,
Operator,
OutputContext,
env,
input,
output,
resource,
)
from monai.transforms import AddChannel, Compose, EnsureType, ScaleIntensity
MEDNIST_CLASSES = ["AbdomenCT", "BreastMRI", "CXR", "ChestCT", "Hand", "HeadCT"]
###Output
_____no_output_____
###Markdown
Creating Operator classes LoadPILOperator
###Code
@input("image", DataPath, IOType.DISK)
@output("image", Image, IOType.IN_MEMORY)
@env(pip_packages=["pillow"])
class LoadPILOperator(Operator):
"""Load image from the given input (DataPath) and set numpy array to the output (Image)."""
def compute(self, input: InputContext, output: OutputContext, context: ExecutionContext):
import numpy as np
from PIL import Image as PILImage
input_path = input.get().path
image = PILImage.open(input_path)
image = image.convert("L") # convert to greyscale image
image_arr = np.asarray(image)
output_image = Image(image_arr) # create Image domain object with a numpy array
output.set(output_image)
###Output
_____no_output_____
###Markdown
MedNISTClassifierOperator
###Code
@input("image", Image, IOType.IN_MEMORY)
@output("output", DataPath, IOType.DISK)
@env(pip_packages=["monai"])
class MedNISTClassifierOperator(Operator):
"""Classifies the given image and returns the class name."""
@property
def transform(self):
return Compose([AddChannel(), ScaleIntensity(), EnsureType()])
def compute(self, input: InputContext, output: OutputContext, context: ExecutionContext):
import json
import torch
img = input.get().asnumpy() # (64, 64), uint8
image_tensor = self.transform(img) # (1, 64, 64), torch.float64
image_tensor = image_tensor[None].float() # (1, 1, 64, 64), torch.float32
# Comment below line if you want to do CPU inference
image_tensor = image_tensor.cuda()
model = context.models.get() # get a TorchScriptModel object
# Uncomment the following line if you want to do CPU inference
# model.predictor = torch.jit.load(model.path, map_location="cpu").eval()
with torch.no_grad():
outputs = model(image_tensor)
_, output_classes = outputs.max(dim=1)
result = MEDNIST_CLASSES[output_classes[0]] # get the class name
print(result)
# Get output (folder) path and create the folder if not exists
output_folder = output.get().path
output_folder.mkdir(parents=True, exist_ok=True)
# Write result to "output.json"
output_path = output_folder / "output.json"
with open(output_path, "w") as fp:
json.dump(result, fp)
###Output
_____no_output_____
###Markdown
Creating Application classOur application class would look like below.It defines the `App` class, which inherits from the `Application` class.`LoadPILOperator` is connected to `MedNISTClassifierOperator` by using `self.add_flow()` in the `compose()` method of `App`.
###Code
@resource(cpu=1, gpu=1, memory="1Gi")
class App(Application):
"""Application class for the MedNIST classifier."""
def compose(self):
load_pil_op = LoadPILOperator()
classifier_op = MedNISTClassifierOperator()
self.add_flow(load_pil_op, classifier_op)
###Output
_____no_output_____
###Markdown
Executing app locallyLet's find a test input file path to use.
###Code
test_input_path = image_files[0][0]
print(f"Test input file path: {test_input_path}")
###Output
Test input file path: /tmp/tmpfnkekqlj/MedNIST/AbdomenCT/007000.jpeg
###Markdown
We can execute the app in the Jupyter notebook.
###Code
app = App()
app.run(input=test_input_path, output="output", model="classifier.zip")
!cat output/output.json
###Output
"AbdomenCT"
###Markdown
Once the application is verified inside the Jupyter notebook, we can write the whole application as a file (`mednist_classifier_monaideploy.py`) by concatenating the code above, then add the following lines:```pythonif __name__ == "__main__": App(do_run=True)```The above lines are needed to execute the application code with the `python` interpreter.
###Code
%%writefile mednist_classifier_monaideploy.py
# Copyright 2021 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from monai.deploy.core import (
Application,
DataPath,
ExecutionContext,
Image,
InputContext,
IOType,
Operator,
OutputContext,
env,
input,
output,
resource,
)
from monai.transforms import AddChannel, Compose, EnsureType, ScaleIntensity
MEDNIST_CLASSES = ["AbdomenCT", "BreastMRI", "CXR", "ChestCT", "Hand", "HeadCT"]
@input("image", DataPath, IOType.DISK)
@output("image", Image, IOType.IN_MEMORY)
@env(pip_packages=["pillow"])
class LoadPILOperator(Operator):
"""Load image from the given input (DataPath) and set numpy array to the output (Image)."""
def compute(self, input: InputContext, output: OutputContext, context: ExecutionContext):
import numpy as np
from PIL import Image as PILImage
input_path = input.get().path
image = PILImage.open(input_path)
image = image.convert("L") # convert to greyscale image
image_arr = np.asarray(image)
output_image = Image(image_arr) # create Image domain object with a numpy array
output.set(output_image)
@input("image", Image, IOType.IN_MEMORY)
@output("output", DataPath, IOType.DISK)
@env(pip_packages=["monai"])
class MedNISTClassifierOperator(Operator):
"""Classifies the given image and returns the class name."""
@property
def transform(self):
return Compose([AddChannel(), ScaleIntensity(), EnsureType()])
def compute(self, input: InputContext, output: OutputContext, context: ExecutionContext):
import json
import torch
img = input.get().asnumpy() # (64, 64), uint8
image_tensor = self.transform(img) # (1, 64, 64), torch.float64
image_tensor = image_tensor[None].float() # (1, 1, 64, 64), torch.float32
# Comment below line if you want to do CPU inference
image_tensor = image_tensor.cuda()
model = context.models.get() # get a TorchScriptModel object
# Uncomment the following line if you want to do CPU inference
# model.predictor = torch.jit.load(model.path, map_location="cpu").eval()
with torch.no_grad():
outputs = model(image_tensor)
_, output_classes = outputs.max(dim=1)
result = MEDNIST_CLASSES[output_classes[0]] # get the class name
print(result)
# Get output (folder) path and create the folder if not exists
output_folder = output.get().path
output_folder.mkdir(parents=True, exist_ok=True)
# Write result to "output.json"
output_path = output_folder / "output.json"
with open(output_path, "w") as fp:
json.dump(result, fp)
@resource(cpu=1, gpu=1, memory="1Gi")
class App(Application):
"""Application class for the MedNIST classifier."""
def compose(self):
load_pil_op = LoadPILOperator()
classifier_op = MedNISTClassifierOperator()
self.add_flow(load_pil_op, classifier_op)
if __name__ == "__main__":
App(do_run=True)
###Output
Writing mednist_classifier_monaideploy.py
###Markdown
This time, let's execute the app from the command line.
###Code
!python mednist_classifier_monaideploy.py -i {test_input_path} -o output -m classifier.zip
###Output
[34mGoing to initiate execution of operator LoadPILOperator[39m
[32mExecuting operator LoadPILOperator [33m(Process ID: 31291, Operator ID: 3d72e773-6fb4-4943-a007-24949a1da98c)[39m
[34mDone performing execution of operator LoadPILOperator
[39m
[34mGoing to initiate execution of operator MedNISTClassifierOperator[39m
[32mExecuting operator MedNISTClassifierOperator [33m(Process ID: 31291, Operator ID: 5d5b6935-5bd0-4e2b-984d-b66887e55c09)[39m
Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448272031/work/c10/core/TensorImpl.h:1156.)
AbdomenCT
[34mDone performing execution of operator MedNISTClassifierOperator
[39m
###Markdown
The above command is equivalent to the following command line:
###Code
!monai-deploy exec mednist_classifier_monaideploy.py -i {test_input_path} -o output -m classifier.zip
!cat output/output.json
###Output
"AbdomenCT"
###Markdown
Packaging app Let's package the app with [MONAI Application Packager](/developing_with_sdk/packaging_app).
###Code
!monai-deploy package mednist_classifier_monaideploy.py --tag mednist_app:latest --model classifier.zip # -l DEBUG
###Output
Building MONAI Application Package... Done
[2021-09-15 01:30:45,783] [INFO] (app_packager) - Successfully built mednist_app:latest
###Markdown
:::{note}Building a MONAI Application Package (Docker image) can take time. Use `-l DEBUG` option if you want to see the progress.:::We can see that the Docker image is created.
###Code
!docker image ls | grep mednist_app
###Output
mednist_app latest cec05c8f652d 3 seconds ago 15.3GB
###Markdown
Executing the packaged app locally: The packaged app can be run locally through the [MONAI Application Runner](/developing_with_sdk/executing_packaged_app_locally).
###Code
!monai-deploy run mednist_app:latest {test_input_path} output
!cat output/output.json
###Output
"AbdomenCT"
###Markdown
**Note**: Please execute the following script once the exercise is done.
###Code
# Remove data files that are in the temporary folder
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____ |
improved_GAN/LSGAN.ipynb | ###Markdown
LSGAN: GANs are hard to train largely because of how the loss function is optimized. Optimizing the Jensen-Shannon divergence is the central difficulty of a GAN, and it is hard to optimize when the two distributions barely overlap. WGAN solves this by using the EMD (Wasserstein) loss, which remains a smooth, differentiable function even when the two distributions have almost no overlapping region. However, WGAN tends not to pay much attention to the quality of the generated images, so there is still room for improvement. LSGAN addresses this with a least-squares loss. The reason generated data quality is poor when a GAN uses a sigmoid activation with a cross-entropy loss is that, ideally, the distribution of fake samples should be as close as possible to the distribution of real samples; but once fake samples are already being classified as real at the decision boundary, the gradient vanishes. The generator then has no incentive to keep improving the quality of the generated fake data, and fake samples that lie far from the decision boundary no longer try to move toward the real sample distribution. With a least-squares loss, the generator keeps learning to improve its estimate of the real density distribution even when the fake samples already fall inside the real region of the decision boundary. GPU allocation: In environments other than Colab (where GPU memory may be limited), run the code below first to allocate the GPU with memory growth enabled.
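As a quick reference (this formulation follows the original LSGAN paper rather than anything stated in this notebook), the least-squares objectives that the `mse` loss below implements can be written, with target labels 1 for real and 0 for fake, as:

$$\min_{D} V(D) = \tfrac{1}{2}\,\mathbb{E}_{x\sim p_{\mathrm{data}}}\left[(D(x)-1)^{2}\right] + \tfrac{1}{2}\,\mathbb{E}_{z\sim p_{z}}\left[D(G(z))^{2}\right]$$

$$\min_{G} V(G) = \tfrac{1}{2}\,\mathbb{E}_{z\sim p_{z}}\left[(D(G(z))-1)^{2}\right]$$

Because the penalty grows quadratically with the distance from the target label, fake samples that are already classified as real but still sit far from the real data distribution continue to receive a useful gradient.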
###Code
import tensorflow as tf
physical_devices =tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0],True)
from tensorflow.keras.layers import Activation, Dense, Input
from tensorflow.keras.layers import Conv2D, Flatten
from tensorflow.keras.layers import Reshape, Conv2DTranspose
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import load_model
from tensorflow.keras import backend as K
import numpy as np
import math
import os
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Building the generator and discriminator functions: The generator and discriminator functions reuse the DCGAN architecture as-is.
###Code
def build_generator(inputs,
image_size,
activation='sigmoid'):
image_resize = image_size // 4
kernel_size = 5
layer_filters = [128, 64, 32, 1]
x = inputs
x = Dense(image_resize * image_resize * layer_filters[0])(x)
x = Reshape((image_resize, image_resize, layer_filters[0]))(x)
for filters in layer_filters:
if filters > layer_filters[-2]:
strides = 2
else:
strides = 1
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2DTranspose(filters=filters,
kernel_size=kernel_size,
strides=strides,
padding='same')(x)
if activation is not None:
x = Activation(activation)(x)
return Model(inputs, x, name='generator')
def build_discriminator(inputs,
activation='sigmoid'):
kernel_size = 5
layer_filters = [32, 64, 128, 256]
x = inputs
for filters in layer_filters:
if filters == layer_filters[-1]:
strides = 1
else:
strides = 2
x = LeakyReLU(alpha=0.2)(x)
x = Conv2D(filters=filters,
kernel_size=kernel_size,
strides=strides,
padding='same')(x)
x = Flatten()(x)
outputs = Dense(1)(x)
if activation is not None:
print(activation)
outputs = Activation(activation)(outputs)
return Model(inputs, outputs, name='discriminator')
def plot_images(generator,
noise_input,
noise_label=None,
noise_codes=None,
show=False,
step=0,
model_name="gan"):
os.makedirs(model_name, exist_ok=True)
filename = os.path.join(model_name, "%05d.png" % step)
rows = int(math.sqrt(noise_input.shape[0]))
if noise_label is not None:
noise_input = [noise_input, noise_label]
if noise_codes is not None:
noise_input += noise_codes
images = generator.predict(noise_input)
plt.figure(figsize=(2.2, 2.2))
num_images = images.shape[0]
image_size = images.shape[1]
for i in range(num_images):
plt.subplot(rows, rows, i + 1)
image = np.reshape(images[i], [image_size, image_size])
plt.imshow(image, cmap='gray')
plt.axis('off')
plt.savefig(filename)
if show:
plt.show()
else:
plt.close('all')
def test_generator(generator):
noise_input = np.random.uniform(-1.0, 1.0, size=[16, 100])
plot_images(generator,
noise_input=noise_input,
show=True,
model_name="test_outputs")
###Output
_____no_output_____
###Markdown
LSGAN implementation: The structure is almost identical to DCGAN; as shown below, we simply replace every loss function with `mse` and remove the final activation layer. The LSGAN network differs in that its output is linear, i.e., it has no output activation function.
###Code
# Load the MNIST dataset
(x_train,_),(_,_) = mnist.load_data()
# Reshape and normalize the data
image_size = x_train.shape[1]
x_train = np.reshape(x_train, [-1, image_size, image_size, 1])
x_train = x_train.astype('float32')/255
model_name = "lsgan_mnist"
# Set the network parameters
latent_size = 100
batch_size = 64
lr = 2e-4
decay = 6e-8
train_steps = 40000
input_shape = (image_size, image_size, 1)
# Build the discriminator model
inputs = Input(shape=input_shape, name = 'discriminator_input')
discriminator = build_discriminator(inputs, activation=None)
optimizer = RMSprop(lr=lr, decay=decay)
# LSGAN uses the MSE loss.
discriminator.compile(loss='mse',optimizer=optimizer,metrics=['accuracy'])
discriminator.summary()
# Build the generator model
input_shape = (latent_size,)
inputs = Input(shape=input_shape, name = 'z_input')
generator = build_generator(inputs, image_size)
generator.summary()
# Build the adversarial model
optimizer = RMSprop(lr=lr*0.5, decay=decay*0.5)
# The discriminator weights are frozen while the adversarial network is trained
discriminator.trainable = False
adversarial = Model(inputs, discriminator(generator(inputs)),name = model_name)
adversarial.compile(loss='mse', optimizer=optimizer,metrics=['accuracy'])
adversarial.summary()
models = (generator, discriminator, adversarial)
params = (batch_size, latent_size, train_steps, model_name)
###Output
_____no_output_____
###Markdown
Training the LSGAN
###Code
def train(models, x_train, params):
    # The arguments are the models and params tuples defined in the previous cell, plus the training images x_train.
    """
    Train the discriminator and the adversarial network alternately, one batch at a time.
    The discriminator is first trained on correctly labeled real and fake images;
    the adversarial network is then trained on fake images that pretend to be real.
    """
    # Unpack the GAN models
generator, discriminator, adversarial = models
    # Network parameters
batch_size, latent_size, train_steps, model_name = params
    # Save generator output images every 500 steps
save_interval = 500
    # Noise vector used to show how the generator output evolves during training
noise_input = np.random.uniform(-1.0, 1.0, size = [16, latent_size])
    # Number of training images
train_size = x_train.shape[0]
for i in range(train_steps):
        # Train the discriminator for one batch
        # Randomly pick real images from the dataset
rand_indices = np.random.randint(0, train_size, size = batch_size)
real_images = x_train[rand_indices]
        # Generate fake images from noise using the generator.
        # Sample noise from the noise distribution
noise = np.random.uniform(-1.0, 1.0, size=[batch_size, latent_size])
        # Generate the fake images
fake_images = generator.predict(noise)
        # Build one training batch of real and fake images
x = np.concatenate((real_images, fake_images))
        # Attach labels (real = 1.0, fake = 0.0)
y = np.ones([2*batch_size, 1])
y[batch_size:, :] = 0.0
        # Train the discriminator and record the loss and accuracy
loss, acc = discriminator.train_on_batch(x, y)
log = "%d:[discriminator loss = %f, acc: %f]" %(i, loss, acc)
        # Train the adversarial network for one batch
        # A batch of fake images labeled as real (1.0)
        # Since the discriminator weights are frozen, only the generator is trained.
noise = np.random.uniform(-1.0, 1.0, size=[batch_size, latent_size])
        # Label the fake images as real, i.e. 1.0
y = np.ones([batch_size, 1])
        # Unlike when training the discriminator, the fake images are not stored in a variable.
        # The fake images are passed to the adversarial network's discriminator input for classification
loss, acc = adversarial.train_on_batch(noise, y)
log = "%s:[adversarial loss = %f, acc: %f]" %(log, loss, acc)
print(log)
if (i+1) % save_interval == 0:
if (i+1) == train_steps:
show = True
else:
show = False
plot_images(generator, noise_input=noise_input, show=show, step=(i+1), model_name=model_name)
    generator.save(model_name + ".h5")
train(models, x_train, params)
###Output
_____no_output_____ |
python-sdk/experimental/deploy-edge/ase-gpu.ipynb | ###Markdown
Deploying an ML model as a web service on Azure Stack: This notebook shows the steps for registering a model, creating an image, and provisioning and deploying a service using IoT Edge on Azure Stack Edge.
###Code
!pip install --upgrade azureml-contrib-services tensorflow
###Output
_____no_output_____
###Markdown
Get workspace
###Code
from azureml.core.workspace import Workspace
ws = Workspace.from_config()
ws
###Output
_____no_output_____
###Markdown
Download the model: Prior to registering the model, you should have a TensorFlow [Saved Model](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md) in the `resnet50` directory. This cell will download a [pretrained resnet50](http://download.tensorflow.org/models/official/20181001_resnet/savedmodels/resnet_v1_fp32_savedmodel_NCHW_jpg.tar.gz) and unpack it to that directory.
###Code
import os
import requests
import shutil
import tarfile
import tempfile
from io import BytesIO
model_url = "http://download.tensorflow.org/models/official/20181001_resnet/savedmodels/resnet_v1_fp32_savedmodel_NCHW_jpg.tar.gz"
archive_prefix = "./resnet_v1_fp32_savedmodel_NCHW_jpg/1538686758/"
target_folder = "resnet50"
if not os.path.exists(target_folder):
response = requests.get(model_url)
archive = tarfile.open(fileobj=BytesIO(response.content))
with tempfile.TemporaryDirectory() as temp_folder:
archive.extractall(temp_folder)
shutil.copytree(os.path.join(temp_folder, archive_prefix), target_folder)
###Output
_____no_output_____
###Markdown
Register the model: Register an existing trained model, adding a description and tags.
###Code
from azureml.core.model import Model
model = Model.register(
model_path="resnet50", # This points to the local directory to upload.
model_name="resnet50", # This is the name the model is registered as.
tags={"area": "Image classification", "type": "classification"},
description="Image classification trained on Imagenet Dataset",
workspace=ws,
)
print(model.name, model.description, model.version)
###Output
_____no_output_____
###Markdown
Deploy the model as a web service to Edge: We begin by writing a score.py file that will be invoked by the web service call. The init() function is called once when the container is started, so we load the model there using a TensorFlow session. The run() function is called whenever the web service is invoked for inferencing. See [score.py](src/score.py). Now create the deployment configuration objects.
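The actual entry script ships in the repository as src/score.py and is not reproduced in this notebook; the snippet below is only a rough sketch of the init()/run() structure described above. The tensor-name lookup, the raw-JPEG input handling, and the response format are assumptions for illustration, not code taken from the repository.

```python
import json
import tensorflow as tf
from azureml.core.model import Model


def init():
    global session, input_name, output_name
    # Locate the registered "resnet50" SavedModel inside the service container
    model_path = Model.get_model_path("resnet50")
    # Load the SavedModel into a TF 1.x session (the image pins tensorflow-gpu==1.12.0)
    session = tf.Session()
    meta_graph = tf.saved_model.loader.load(session, ["serve"], model_path)
    signature = meta_graph.signature_def["serving_default"]
    input_name = list(signature.inputs.values())[0].name
    output_name = list(signature.outputs.values())[0].name


def run(raw_data):
    # raw_data holds the request body; here it is assumed to be the raw JPEG bytes
    # of the image to classify, matching the NCHW_jpg SavedModel variant above.
    predictions = session.run(output_name, feed_dict={input_name: [raw_data]})
    return json.dumps(predictions.tolist())
```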
###Code
# Set the web service configuration (using default here)
from azureml.core.model import InferenceConfig
# from azureml.core.webservice import AksWebservice
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.environment import Environment, DEFAULT_GPU_IMAGE
env = Environment("deploytoedgeenv")
# Please see [Azure ML Containers repository](https://github.com/Azure/AzureML-Containers#featured-tags)
# for open-sourced GPU base images.
env.docker.base_image = DEFAULT_GPU_IMAGE
env.python.conda_dependencies = CondaDependencies.create(
conda_packages=["tensorflow-gpu==1.12.0", "numpy"],
pip_packages=["azureml-contrib-services", "azureml-defaults"],
)
inference_config = InferenceConfig(
source_directory="src", entry_script="score.py", environment=env
)
###Output
_____no_output_____
###Markdown
Create the container image in Azure ML: Use Azure ML to create the container image. This step will likely take a few minutes.
###Code
# provide the name and tag of the Azure container image
imagename = "tfgpu"
imagelabel = "0.2"
# Builds an image in ACR.
package = Model.package(
ws,
[model],
inference_config=inference_config,
image_name=imagename,
image_label=imagelabel,
)
package.wait_for_creation(show_output=True)
print("ACR:", package.get_container_registry)
print("Image:", package.location)
###Output
_____no_output_____
###Markdown
Set up Azure Stack Edge: Follow the [documentation](https://review.docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-gpu-deploy-sample-module-marketplace?branch=release-preview-ase-gpu) to set up compute and validate that the GPUs on ASE are up and running. Set up an Azure IoT Edge device: Follow the [documentation](https://docs.microsoft.com/en-us/azure/iot-edge/quickstart-linux) to set up a Linux VM as an Azure IoT Edge device. Deploy the container to the Azure IoT Edge device
###Code
acr_name = package.location.split("/")[0]
reg_name = acr_name.split(".")[0]
subscription_id = ws.subscription_id
print("{}".format(acr_name))
print("{}".format(subscription_id))
# TODO: Derive image_location through code.
image_location = acr_name + "/" + imagename + ":" + imagelabel
print("{}".format(image_location))
# Fetch username, password of ACR.
from azure.mgmt.containerregistry import ContainerRegistryManagementClient
from azure.mgmt import containerregistry
client = ContainerRegistryManagementClient(ws._auth, subscription_id)
result = client.registries.list_credentials(
ws.resource_group, reg_name, custom_headers=None, raw=False
)
username = result.username
password = result.passwords[0].value
###Output
_____no_output_____
###Markdown
Create a deployment.json file using the template json. Then push the deployment json file to the IoT Hub, which will then send it to the IoT Edge device. The IoT Edge agent will then pull the Docker images and run them.
###Code
module_name = "tfgpu"
file = open("src/iotedge-tf-template-gpu.json")
contents = file.read()
contents = contents.replace("__MODULE_NAME", module_name)
contents = contents.replace("__REGISTRY_NAME", reg_name)
contents = contents.replace("__REGISTRY_USER_NAME", username)
contents = contents.replace("__REGISTRY_PASSWORD", password)
contents = contents.replace("__REGISTRY_IMAGE_LOCATION", image_location)
with open("deployment_gpu.json", "wt", encoding="utf-8") as output_file:
output_file.write(contents)
###Output
_____no_output_____
###Markdown
Sending the deployment to the edge device
###Code
## working example !az iot edge set-modules --device-id juanedge --hub-name yadavmAiMLGpu --content deployment_gpu.json
# UNCOMMENT TO RUN, once you put your device's info
#!az iot edge set-modules --device-id <replace with iot edge device name> --hub-name <replace with iot hub name> --content deployment_gpu.json
###Output
_____no_output_____
###Markdown
Test the web service: We test the web service by passing the content of test images.
###Code
# downloading labels for imagenet that resnet model was trained on
import requests
classes_entries = requests.get(
"https://raw.githubusercontent.com/Lasagne/Recipes/master/examples/resnet50/imagenet_classes.txt"
).text.splitlines()
%%time
import requests
## Run it like so, for example:
# do_inference("snowleopardgaze.jpg", "http://51.141.178.47:5001/score")
def do_inference(myfilename, myscoring_uri):
test_sample = open(myfilename, "rb").read()
try:
scoring_uri = (
# You can construct your own, passing only the ip in arguments
# "http://<replace with yout edge device ip address>:5001/score"
#
myscoring_uri
)
# Set the content type
headers = {"Content-Type": "application/json"}
# Make the request
resp = requests.post(scoring_uri, test_sample, headers=headers)
print("Found a ::" + classes_entries[int(resp.text.strip("[]")) - 1])
except KeyError as e:
print(str(e))
###Output
_____no_output_____ |
classes/04 unsupervised/04_unsupervised_04.ipynb | ###Markdown
A demo of K-Means clustering on the handwritten digits data: In this example we compare the various initialization strategies for K-means in terms of runtime and quality of the results. As the ground truth is known here, we also apply different cluster quality metrics to judge the goodness of fit of the cluster labels to the ground truth. Cluster quality metrics evaluated (see `clustering_evaluation` for definitions and discussions of the metrics): homo = homogeneity score, compl = completeness score, v-meas = V measure, ARI = adjusted Rand index, AMI = adjusted mutual information, silhouette = silhouette coefficient. Load the dataset: We will start by loading the `digits` dataset. This dataset contains handwritten digits from 0 to 9. In the context of clustering, one would like to group images such that the handwritten digits on the image are the same.
###Code
import numpy as np
from sklearn.datasets import load_digits
data, labels = load_digits(return_X_y=True)
(n_samples, n_features), n_digits = data.shape, np.unique(labels).size
print(f"# digits: {n_digits}; # samples: {n_samples}; # features {n_features}")
###Output
# digits: 10; # samples: 1797; # features 64
###Markdown
Define our evaluation benchmark: We will first define our evaluation benchmark. During this benchmark, we intend to compare different initialization methods for KMeans. Our benchmark will: create a pipeline which will scale the data using a :class:`~sklearn.preprocessing.StandardScaler`; train and time the pipeline fitting; and measure the performance of the clustering obtained via different metrics.
###Code
from time import time
from sklearn import metrics
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
def bench_k_means(kmeans, name, data, labels):
"""Benchmark to evaluate the KMeans initialization methods.
Parameters
----------
kmeans : KMeans instance
A :class:`~sklearn.cluster.KMeans` instance with the initialization
already set.
name : str
Name given to the strategy. It will be used to show the results in a
table.
data : ndarray of shape (n_samples, n_features)
The data to cluster.
labels : ndarray of shape (n_samples,)
The labels used to compute the clustering metrics which requires some
supervision.
"""
t0 = time()
estimator = make_pipeline(StandardScaler(), kmeans).fit(data)
fit_time = time() - t0
results = [name, fit_time, estimator[-1].inertia_]
# Define the metrics which require only the true labels and estimator
# labels
clustering_metrics = [
metrics.homogeneity_score,
metrics.completeness_score,
metrics.v_measure_score,
metrics.adjusted_rand_score,
metrics.adjusted_mutual_info_score,
]
results += [m(labels, estimator[-1].labels_) for m in clustering_metrics]
# The silhouette score requires the full dataset
results += [
metrics.silhouette_score(
data,
estimator[-1].labels_,
metric="euclidean",
sample_size=300,
)
]
# Show the results
formatter_result = (
"{:9s}\t{:.3f}s\t{:.0f}\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}\t{:.3f}"
)
print(formatter_result.format(*results))
###Output
_____no_output_____
###Markdown
Run the benchmark: We will compare three approaches: an initialization using `kmeans++` (this method is stochastic and we will run the initialization 4 times); a random initialization (this method is stochastic as well and we will run the initialization 4 times); and an initialization based on a :class:`~sklearn.decomposition.PCA` projection. Indeed, we will use the components of the :class:`~sklearn.decomposition.PCA` to initialize KMeans. This method is deterministic and a single initialization suffices.
###Code
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
print(82 * "_")
print("init\t\ttime\tinertia\thomo\tcompl\tv-meas\tARI\tAMI\tsilhouette")
kmeans = KMeans(init="k-means++", n_clusters=n_digits, n_init=4, random_state=0)
bench_k_means(kmeans=kmeans, name="k-means++", data=data, labels=labels)
kmeans = KMeans(init="random", n_clusters=n_digits, n_init=4, random_state=0)
bench_k_means(kmeans=kmeans, name="random", data=data, labels=labels)
pca = PCA(n_components=n_digits).fit(data)
kmeans = KMeans(init=pca.components_, n_clusters=n_digits, n_init=1)
bench_k_means(kmeans=kmeans, name="PCA-based", data=data, labels=labels)
print(82 * "_")
###Output
__________________________________________________________________________________
init time inertia homo compl v-meas ARI AMI silhouette
k-means++ 0.914s 69662 0.680 0.719 0.699 0.570 0.695 0.163
random 0.324s 69707 0.675 0.716 0.694 0.560 0.691 0.174
PCA-based 0.187s 72686 0.636 0.658 0.647 0.521 0.643 0.154
__________________________________________________________________________________
###Markdown
Visualize the results on PCA-reduced data: :class:`~sklearn.decomposition.PCA` allows us to project the data from the original 64-dimensional space into a lower dimensional space. Subsequently, we can use :class:`~sklearn.decomposition.PCA` to project into a 2-dimensional space and plot the data and the clusters in this new space.
###Code
import matplotlib.pyplot as plt
reduced_data = PCA(n_components=2).fit_transform(data)
kmeans = KMeans(init="k-means++", n_clusters=n_digits, n_init=4)
kmeans.fit(reduced_data)
# Step size of the mesh. Decrease to increase the quality of the VQ.
h = 0.02 # point in the mesh [x_min, x_max]x[y_min, y_max].
# Plot the decision boundary. For that, we will assign a color to each
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(
Z,
interpolation="nearest",
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect="auto",
origin="lower",
)
plt.plot(reduced_data[:, 0], reduced_data[:, 1], "k.", markersize=2)
# Plot the centroids as a white X
centroids = kmeans.cluster_centers_
plt.scatter(
centroids[:, 0],
centroids[:, 1],
marker="x",
s=169,
linewidths=3,
color="w",
zorder=10,
)
plt.title(
"K-means clustering on the digits dataset (PCA-reduced data)\n"
"Centroids are marked with white cross"
)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
###Output
_____no_output_____ |
save_ray_clone_pairs_trading_03202022_v19_TensorTrade.ipynb | ###Markdown
###Code
!pip install tensortrade
!pip install ray[default,rllib,tune]==1.8.0
!pip install yfinance==0.1.64
!pip install pandas-ta==0.3.14b --pre
import yfinance
import pandas_ta #noqa
TRAIN_START_DATE = '2013-01-01' # TODO: replace this with your own start date
TRAIN_END_DATE = '2020-12-31' # TODO: replace this with your own end date
EVAL_START_DATE = '2021-03-17' # TODO: replace this with your own end date
EVAL_END_DATE = '2022-03-17' # TODO: replace this with your own end date
yf_ticker_AAPL = yfinance.Ticker(ticker='AAPL')
yf_ticker_TXT = yfinance.Ticker(ticker='TXT')
df_training_AAPL = yf_ticker_AAPL.history(start=TRAIN_START_DATE, end=TRAIN_END_DATE, interval='1d')
df_training_AAPL.drop(['Dividends', 'Stock Splits'], axis=1, inplace=True)
df_training_AAPL["Volume"] = df_training_AAPL["Volume"].astype(int)
df_evaluation_AAPL = yf_ticker_AAPL.history(start=EVAL_START_DATE, end=EVAL_END_DATE, interval='1d')
df_evaluation_AAPL.drop(['Dividends', 'Stock Splits'], axis=1, inplace=True)
df_evaluation_AAPL["Volume"] = df_evaluation_AAPL["Volume"].astype(int)
df_training_TXT = yf_ticker_TXT.history(start=TRAIN_START_DATE, end=TRAIN_END_DATE, interval='1d')
df_training_TXT.drop(['Dividends', 'Stock Splits'], axis=1, inplace=True)
df_training_TXT["Volume"] = df_training_TXT["Volume"].astype(int)
df_evaluation_TXT = yf_ticker_TXT.history(start=EVAL_START_DATE, end=EVAL_END_DATE, interval='1d')
df_evaluation_TXT.drop(['Dividends', 'Stock Splits'], axis=1, inplace=True)
df_evaluation_TXT["Volume"] = df_evaluation_TXT["Volume"].astype(int)
from tensortrade.oms.instruments import Instrument
USD = Instrument("USD", 2, "U.S. Dollar")
TTC = Instrument("TTC", 8, "TensorTrade Coin")
#params negative multiplier
import numpy as np
import pandas as pd
import statsmodels.api as sm
merge_training=pd.merge(df_training_AAPL, df_training_TXT, how='inner', left_index=True, right_index=True)
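# Build the pair spread features: AAPL (the _x columns) minus TXT (the _y columns), plus a constant offset of 100 to keep the series positive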
merge_training['close_spread'] = merge_training.Close_x - (merge_training.Close_y ) + 100
merge_training['open_spread'] = merge_training.Open_x - (merge_training.Open_y ) + 100
merge_training['high_spread'] = merge_training.High_x - (merge_training.High_y ) + 100
merge_training['low_spread'] = merge_training.Low_x - (merge_training.Low_y ) + 100
merge_training['volume_spread'] = merge_training.Volume_x - (merge_training.Volume_y ) + 100
merge_training.drop(['Open_x', 'High_x', 'Low_x', 'Close_x', 'Volume_x', 'Open_y', 'High_y', 'Low_y', 'Close_y', 'Volume_y'], axis=1, inplace=True)
merge_training.reset_index(level=0, inplace=True)
merge_training.to_csv('training.csv', index=False)
# Build the same spread features for the evaluation data (the variable is still named merge_training here)
import numpy as np
import pandas as pd
import statsmodels.api as sm
merge_training=pd.merge(df_evaluation_AAPL, df_evaluation_TXT, how='inner', left_index=True, right_index=True)
merge_training['close_spread'] = merge_training.Close_x - (merge_training.Close_y ) + 100
merge_training['open_spread'] = merge_training.Open_x - (merge_training.Open_y ) + 100
merge_training['high_spread'] = merge_training.High_x - (merge_training.High_y ) + 100
merge_training['low_spread'] = merge_training.Low_x - (merge_training.Low_y ) + 100
merge_training['volume_spread'] = merge_training.Volume_x - (merge_training.Volume_y ) + 100
merge_training.drop(['Open_x', 'High_x', 'Low_x', 'Close_x', 'Volume_x', 'Open_y', 'High_y', 'Low_y', 'Close_y', 'Volume_y'], axis=1, inplace=True)
merge_training.reset_index(level=0, inplace=True)
merge_training.to_csv('evaluation.csv', index=False)
import ray
from ray import tune
from gym.spaces import Discrete
from tensortrade.env.default.actions import TensorTradeActionScheme
from tensortrade.env.generic import ActionScheme, TradingEnv
from tensortrade.core import Clock
from tensortrade.oms.instruments import ExchangePair
from tensortrade.oms.wallets import Portfolio
from tensortrade.oms.orders import (
Order,
proportion_order,
TradeSide,
TradeType
)
from tensortrade.env.default.rewards import TensorTradeRewardScheme
from tensortrade.feed.core import Stream, DataFeed
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensortrade.env.generic import Renderer
import datetime
from matplotlib.dates import date2num
class PositionChangeChart(Renderer):
def __init__(self, color: str = "orange"):
self.color = "orange"
def render(self, env, **kwargs):
history = pd.DataFrame(env.observer.renderer_history)
actions = list(history.action)
p = list(history.price)
dates = list(pd.to_datetime(history['date']))
buy = {}
sell = {}
for i in range(len(actions) - 1):
a1 = actions[i]
a2 = actions[i + 1]
if a1 != a2:
if a1 == 0 and a2 == 1:
buy[i] = p[i]
else:
sell[i] = p[i]
buy = pd.Series(buy)
sell = pd.Series(sell)
fig, axs = plt.subplots(1, 2, figsize=(15, 5))
fig.suptitle("Performance")
axs[0].plot(history.index, history.price, label="price", color=self.color)
axs[0].scatter(buy.index, buy.values, marker="^", color="green")
axs[0].scatter(sell.index, sell.values, marker="^", color="red")
axs[0].set_title("Trading Chart")
performance_df = pd.DataFrame().from_dict(env.action_scheme.portfolio.performance, orient='index')
performance_df.plot(ax=axs[1])
axs[1].set_title("Net Worth")
plt.show()
from tensortrade.env.default.rewards import TensorTradeRewardScheme
from tensortrade.feed.core import Stream, DataFeed
class PBR(TensorTradeRewardScheme):
registered_name = "pbr"
def __init__(self, price: 'Stream'):
super().__init__()
self.position = -1
r = Stream.sensor(price, lambda p: p.value, dtype="float").diff()
position = Stream.sensor(self, lambda rs: rs.position, dtype="float")
        reward = (r * position).fillna(0).rename("reward")  # reward = price change * current position (+1 when holding the asset, -1 when in cash)
self.feed = DataFeed([reward])
self.feed.compile()
def on_action(self, action: int):
self.position = -1 if action == 0 else 1
def get_reward(self, portfolio: 'Portfolio'):
return self.feed.next()["reward"]
def reset(self):
self.position = -1
self.feed.reset()
import os
import ray
import numpy as np
import pandas as pd
from ray import tune
from ray.tune.registry import register_env
import tensortrade.env.default as default
from tensortrade.feed.core import DataFeed, Stream
from tensortrade.oms.exchanges import Exchange
from tensortrade.oms.services.execution.simulated import execute_order
from tensortrade.oms.wallets import Wallet, Portfolio
def create_env(config):
dataset = pd.read_csv(filepath_or_buffer=config["csv_filename"], parse_dates=['Date']).fillna(method='backfill').fillna(method='ffill')
p = Stream.source(dataset['close_spread'], dtype="float").rename("USD-TTC")
bitfinex = Exchange("bitfinex", service=execute_order)(p)
cash = Wallet(bitfinex, 100000 * USD)
asset = Wallet(bitfinex, 0 * TTC)
portfolio = Portfolio(USD, [
cash,
asset
])
feed = DataFeed([
p,
p.rolling(window=10).mean().rename("fast"),
p.rolling(window=50).mean().rename("medium"),
p.rolling(window=100).mean().rename("slow"),
p.log().diff().fillna(0).rename("lr")
])
reward_scheme = PBR(price=p)
action_scheme = default.actions.BSH(cash=cash, asset=asset)
renderer_feed = DataFeed([
Stream.source(list(dataset["Date"])).rename("date"),
Stream.source(list(dataset["open_spread"]), dtype="float").rename("open"),
Stream.source(list(dataset["high_spread"]), dtype="float").rename("high"),
Stream.source(list(dataset["low_spread"]), dtype="float").rename("low"),
Stream.source(list(dataset["close_spread"]), dtype="float").rename("close"),
Stream.source(list(dataset["volume_spread"]), dtype="float").rename("volume")
])
features = []
for c in dataset.columns[1:]:
s = Stream.source(list(dataset[c]), dtype="float").rename(dataset[c].name)
features += [s]
feed = DataFeed(features)
feed.compile()
environment = default.create(
feed=feed,
portfolio=portfolio,
action_scheme=action_scheme,
reward_scheme=reward_scheme,
renderer_feed=renderer_feed,
renderer=PositionChangeChart(),
window_size=config["window_size"],
max_allowed_loss=0.6
)
return environment
ray.shutdown()
import ray
import os
from ray import tune
from ray.tune.registry import register_env
# Let's define some tuning parameters
FC_SIZE = 1024 #tune.grid_search([[256, 256], [1024]]) # Those are the alternatives that ray.tune will try...
LEARNING_RATE = 0.0001 #tune.grid_search([0.0001, 0.0005]) # ... and they will be combined with these ones ...
MINIBATCH_SIZE = 32 #tune.grid_search([5, 10]) # ... and these ones, in a cartesian product.
# Get the current working directory
cwd = os.getcwd()
# Initialize Ray
ray.init(num_gpus=1) # There are *LOTS* of initialization parameters, like specifying the maximum number of CPUs\GPUs to allocate. For now just leave it alone.
# Register our environment, specifying which is the environment creation function
register_env("MyTrainingEnv", create_env)
# Specific configuration keys that will be used during training
env_config_training = {
"window_size": 14, # We want to look at the last 14 samples (hours)
"reward_window_size": 7, # And calculate reward based on the actions taken in the next 7 hours
"max_allowed_loss": 0.10, # If it goes past 10% loss during the iteration, we don't want to waste time on a "loser".
"csv_filename": os.path.join(cwd, 'training.csv'), # The variable that will be used to differentiate training and validation datasets
}
# Specific configuration keys that will be used during evaluation (only the overridden ones)
env_config_evaluation = {
"max_allowed_loss": 1.00, # During validation runs we want to see how bad it would go. Even up to 100% loss.
"csv_filename": os.path.join(cwd, 'evaluation.csv'), # The variable that will be used to differentiate training and validation datasets
}
analysis = tune.run(
run_or_experiment="PPO", # We'll be using the builtin PPO agent in RLLib
name="MyExperiment1",
metric='episode_reward_mean',
mode='max',
stop={
"training_iteration": 1 # Let's do 5 steps for each hyperparameter combination
},
config={
"env": "MyTrainingEnv",
"env_config": env_config_training, # The dictionary we built before
"log_level": "DEBUG",
"framework": "torch",
"ignore_worker_failures": True,
"num_workers": 1, # One worker per agent. You can increase this but it will run fewer parallel trainings.
"num_envs_per_worker": 1,
"num_gpus": 1, # I yet have to understand if using a GPU is worth it, for our purposes, but I think it's not. This way you can train on a non-gpu enabled system.
"clip_rewards": True,
#"lr": LEARNING_RATE, # Hyperparameter grid search defined above
"gamma": 0.50, # This can have a big impact on the result and needs to be properly tuned (range is 0 to 1)
"observation_filter": "MeanStdFilter",
"model": {
#"fcnet_hiddens": FC_SIZE, # Hyperparameter grid search defined above
},
#"sgd_minibatch_size": MINIBATCH_SIZE, # Hyperparameter grid search defined above
"evaluation_interval": 1, # Run evaluation on every iteration
"evaluation_config": {
"env_config": env_config_evaluation, # The dictionary we built before (only the overriding keys to use in evaluation)
"explore": False, # We don't want to explore during evaluation. All actions have to be repeatable.
},
},
num_samples=1, # Have one sample for each hyperparameter combination. You can have more to average out randomness.
    keep_checkpoints_num=1, # Keep only the last checkpoint
checkpoint_freq=1, # Do a checkpoint on each iteration (slower but you can pick more finely the checkpoint to use later)
)
import ray.rllib.agents.ppo as ppo
# Get checkpoint
checkpoints = analysis.get_trial_checkpoints_paths(
trial=analysis.get_best_trial("episode_reward_mean", mode="max"),
metric="episode_reward_mean"
)
checkpoint_path = checkpoints[0][0]
agent = ppo.PPOTrainer(
env = "MyTrainingEnv",
config={
"env_config": env_config_training,
"framework": "torch",
"log_level": "DEBUG",
"ignore_worker_failures": True,
"num_workers": 1,
"num_gpus": 1,
"clip_rewards": True,
"lr": 0.0001,
}
)
agent.restore(checkpoint_path)
# Specific configuration keys that will be used during training
env_config_training = {
"window_size": 14, # We want to look at the last 14 samples (hours)
"reward_window_size": 7, # And calculate reward based on the actions taken in the next 7 hours
"max_allowed_loss": 0.10, # If it goes past 10% loss during the iteration, we don't want to waste time on a "loser".
"csv_filename": os.path.join(cwd, 'training.csv'), # The variable that will be used to differentiate training and validation datasets
}
# Specific configuration keys that will be used during evaluation (only the overridden ones)
env_config_evaluation = {
"max_allowed_loss": 1.00, # During validation runs we want to see how bad it would go. Even up to 100% loss.
"csv_filename": os.path.join(cwd, 'evaluation.csv'), # The variable that will be used to differentiate training and validation datasets
}
def create_eval_env(config):
dataset = pd.read_csv(filepath_or_buffer=config["csv_filename"], parse_dates=['Date']).fillna(method='backfill').fillna(method='ffill')
p = Stream.source(dataset['close_spread'], dtype="float").rename("USD-TTC")
bitfinex = Exchange("bitfinex", service=execute_order)(
p
)
cash = Wallet(bitfinex, 100000 * USD)
asset = Wallet(bitfinex, 0 * TTC)
portfolio = Portfolio(USD, [
cash,
asset
])
feed = DataFeed([
p,
p.rolling(window=10).mean().rename("fast"),
p.rolling(window=50).mean().rename("medium"),
p.rolling(window=100).mean().rename("slow"),
p.log().diff().fillna(0).rename("lr")
])
reward_scheme = PBR(price=p)
action_scheme = default.actions.BSH(cash=cash, asset=asset)
renderer_feed = DataFeed([
Stream.source(list(dataset["Date"])).rename("date"),
Stream.source(list(dataset["open_spread"]), dtype="float").rename("open"),
Stream.source(list(dataset["high_spread"]), dtype="float").rename("high"),
Stream.source(list(dataset["low_spread"]), dtype="float").rename("low"),
Stream.source(list(dataset["close_spread"]), dtype="float").rename("price"),
Stream.source(list(dataset["volume_spread"]), dtype="float").rename("volume"),
Stream.sensor(action_scheme, lambda s: s.action, dtype="float").rename("action"),
])
environment = default.create(
feed=feed,
portfolio=portfolio,
action_scheme=action_scheme,
reward_scheme=reward_scheme,
renderer_feed=renderer_feed,
renderer=PositionChangeChart(),
window_size=config["window_size"],
max_allowed_loss=0.6
)
return environment
# Instantiate the environment
env = create_eval_env({
"window_size": 14,"csv_filename": os.path.join(cwd, 'evaluation.csv'),
})
# Run until episode ends
episode_reward = 0
done = False
obs = env.reset()
while not done:
action = agent.compute_action(obs)
obs, reward, done, info = env.step(action)
episode_reward += reward
env.render()
import pandas as pd
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
history.head(100)
###Output
_____no_output_____ |
torch_tester.ipynb | ###Markdown
Run forward function
###Code
from threading import Thread
import time
import psutil
import subprocess
import pandas as pd
class GpuReader(Thread):
def __init__(self):
self.process = subprocess.Popen(['tegrastats'], stdout=subprocess.PIPE)
self.stopped = True
self.values = {}
super().__init__()
def __enter__(self):
self.start()
return self
def __exit__(self, exception_type, exception_value, traceback):
self.stop()
def start(self):
self.stopped = False
super().start()
def stop(self):
self.stopped = True
def run(self):
while not self.stopped:
resp = self.process.stdout.readline().strip().decode('utf-8')
resp_array = resp.split(' ')
idx = resp_array.index('GR3D_FREQ')
            self.values['GR3D_FREQ'] = resp_array[idx + 1]
class CpuGpuTracker(Thread):
def __init__(self, Ts):
self.Ts = Ts
self.stopped = True
self.start_time = None
self.values = []
super().__init__()
def __enter__(self):
self.start()
return self
def __exit__(self, exception_type, exception_value, traceback):
# print(exception_type, exception_value, traceback)
self.stop()
def start(self):
self.stopped = False
self.start_time = time.time()
super().start()
def stop(self):
self.stopped = True
def run(self):
while not self.stopped:
mem = psutil.virtual_memory()
cpu_percent = psutil.cpu_percent()
            gpu_percent = 20  # placeholder value; the GpuReader class above is not wired in here
print(mem)
print(f'{time.time():.8f}')
self.values.append((time.time() - self.start_time, mem.percent, cpu_percent, gpu_percent))
time.sleep(self.Ts)
def get_values(self):
return pd.DataFrame(self.values, columns=['time', 'memory', 'cpu_percent', 'gpu_percent'])
import sys
import os
import torch
import time
def execute_net(net_interface, X_data, Y_data, batch_size, priority, loops=1, device='cuda', metrics=['ex_time', 'cpu_max', 'gpu_max', 'mem_max'], echo=True):
if sys.platform == 'linux':
os.nice(priority)
device = torch.device('cpu')
if device == 'cuda':
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if echo and not torch.cuda.is_available():
print("'cuda' device is not available. Using 'cpu' instead.")
batches = split_into_batches(X_data, Y_data, batch_size)
batch_exec_times = []
procesor_tracked_values = None
with CpuGpuTracker(0.1) as tracker:
initial_time = time.time()
for loop in range(loops):
if echo:
print('loop:', loop)
batch_count = 0
for X_batch, Y_batch in batches:
start_time = time.time()
Y_pred = net_interface.predict_net(X_batch)
batch_time = time.time() - start_time
if echo:
print(f'batch_time: {batch_time:.8f}')
batch_exec_times.append((loop, batch_count, batch_time, time.time() - initial_time))
batch_count += 1
procesor_tracked_values = tracker.get_values()
batch_exec_times = pd.DataFrame(batch_exec_times, columns=['loop', 'batch_count', 'batch_time', 'time'])
return procesor_tracked_values, batch_exec_times
tracked_values, batch_exec_times = execute_net(net_interface, X_torch, Y_torch, 10, 0)
tracked_values
batch_exec_times
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
from data_worker.data_worker import unpickle, unpack_data, combine_batches, split_into_batches
from torch_lib.data_worker import suit4torch
from torch_lib.Interface import Interface
from torch_lib.Nets import LargeNet, MediumNet, SmallNet
###Output
_____no_output_____
###Markdown
Data
###Code
batches_names = [
'data_batch_1', 'data_batch_2', 'data_batch_3', 'data_batch_4',
'data_batch_5'
]
data_batches = [
unpickle(f'datasets/cifar-10-batches-py/{batch_name}') for batch_name
in batches_names]
unpacked_batches = [
(unpack_data(data_batch)) for data_batch
in data_batches]
print(unpacked_batches[0][0].shape)
X, Y = combine_batches(unpacked_batches)
X_torch, Y_torch = suit4torch(X, Y)
batches = split_into_batches(X_torch, Y_torch, 3)
# torch_batches = [(suit4torch(X, Y)) for X, Y in batches]
X_batch0 = batches[0][0]
Y_batch0 = batches[0][1]
print(X.shape, Y.shape)
###Output
(10000, 32, 32, 3)
(32, 32, 3)
(50000, 32, 32, 3) (50000,)
###Markdown
Train network
###Code
# net = SmallNet()
# net = MediumNet()
net = LargeNet()
net_interface = Interface(net)
net_interface.train_net(batches, 1, verbose=False)
###Output
_____no_output_____
###Markdown
Save weights
###Code
# PATH = 'saved_nets/saved_torch/small_v1.pth'
# PATH = 'saved_nets/saved_torch/medium_v1.pth'
PATH = 'saved_nets/saved_torch/large_v1.pth'
net_interface.save_weights(PATH)
###Output
_____no_output_____
###Markdown
Predict
###Code
preds = net_interface.predict_net(X_batch0)
preds = preds.detach().numpy()
print(preds)
print(np.argmax(preds, axis=1), Y_batch0)
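# np.argmax over the class-score columns gives the predicted CIFAR-10 label for each image,
# which can then be compared element-wise with the true labels in Y_batch0.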
###Output
_____no_output_____
###Markdown
Evaluate Accuracy
###Code
acc, N = net_interface.eval_acc_net(X_torch, Y_torch)
print(acc, N)
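# 0.1 accuracy equals chance level for the 10 CIFAR-10 classes (1 correct guess in 10).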
###Output
0.1 50000
###Markdown
Load weights
###Code
net_interface.load_weights(PATH)
preds = net_interface.predict_net(X_torch)
preds = preds.detach().numpy()
print(preds)
print(np.argmax(preds, axis=1), Y_torch)
###Output
[[-0.03193782 -0.02609414 -0.09416538 ... -0.04211621 0.0363774
-0.00137505]
[-0.03180882 -0.02602005 -0.09423827 ... -0.04228754 0.0366769
-0.00128849]
[-0.03197443 -0.02589439 -0.09421802 ... -0.04241734 0.03658268
-0.0012246 ]
...
[-0.03207375 -0.02579184 -0.0941498 ... -0.04242862 0.03647955
-0.00118262]
[-0.0313148 -0.02602111 -0.09421885 ... -0.04214914 0.03719144
-0.00123877]
[-0.03213761 -0.0258862 -0.09394593 ... -0.04200218 0.03606745
-0.00131202]]
[8 8 8 ... 8 8 8] tensor([6, 9, 9, ..., 9, 1, 1])
|
assignments/assignment03/ProjectEuler8.ipynb | ###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
"""
# get d1000 readable
d1000_str = "".join(d1000.split("\n"))
# put values into an array
d1000_ar = np.array([i for i in d1000_str], dtype = int)
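# Illustrative sanity check (assumes d1000_ar above holds all 1000 digits): the problem statement
# says the best product of 4 adjacent digits is 9 * 9 * 8 * 9 = 5832, so this should print 5832.
print(max(np.prod(d1000_ar[i:i + 4]) for i in range(len(d1000_ar) - 3)))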
# compute products of n adjacent values and keep largest
def myprod(n):
test = 0
    for i in range(n, len(d1000_ar) + 1):  # go up to len+1 so the window ending at the last digit is included
        test_cur = np.prod(d1000_ar[(i - n):i])
if test_cur > test:
test = test_cur
return(test)
myprod(4)
myprod(13)
assert True # leave this for grading
print(np.prod(np.array([3,2,4])))
print(np.array([3,2,4]).prod())
###Output
24
24
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = "7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450"
# YOUR CODE HERE
d = np.array(range(1000))
for x in range(1000):
d[x] = d1000[x]
print(d)
def mult_thirteen(lst):
prod = np.array(range(1000-12))
    for a in range(len(lst) - 12):  # -12 so the window ending at the last digit is included
x=1
for b in range(a,a+13):
x *= lst[b]
prod[a] = x
return prod
products = mult_thirteen(d)
ind = np.argmax(products)
print(d[ind:ind+13])
print(products.max())
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
"""
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = "\
73167176531330624919225119674426574742355349194934\
96983520312774506326239578318016984801869478851843\
85861560789112949495459501737958331952853208805511\
12540698747158523863050715693290963295227443043557\
66896648950445244523161731856403098711121722383113\
62229893423380308135336276614282806444486645238749\
30358907296290491560440772390713810515859307960866\
70172427121883998797908792274921901699720888093776\
65727333001053367881220235421809751254540594752243\
52584907711670556013604839586446706324415722155397\
53697817977846174064955149290862569321978468622482\
83972241375657056057490261407972968652414535100474\
82166370484403199890008895243450658541227588666881\
16427171479924442928230863465674813919123162824586\
17866458359124566529476545682848912883142607690042\
24219022671055626321111109370544217506941658960408\
07198403850962455444362981230987879927244284909188\
84580156166097919133875499200524063689912560717606\
05886116467109405077541002256983155200055935729725\
71636269561882670428252483600823257530420752963450\
"
def strproduct(lst):
return np.cumprod(lst)[-1]
bestprod=0
num = np.empty((len(d1000) - 12, 13))
for n in np.arange(len(d1000) - 12):
    print(n / (len(d1000) - 12))  # progress indicator
    for i in range(13):
        num[n, i] = d1000[n + i]
prod = strproduct(num[n])
if prod > bestprod:
bestprod=prod
print(bestprod)
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450"""
num = 7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450
# YOUR CODE HERE
#raise NotImplementedError()
#Returns greatest product of 13 adjacent digits in d1000
def max_mult(number):
w=[]
y=[]
z=[]
#creates 1-D array full of 1000 zeros
x = np.zeros(1000)
s=0
for i in str(number):
#replaces every zero in x with corresponding digit in d1000
x[s]= float(i)
#appends arrays, each consisting of 13 adjacent digits, to y
#loop allows for all possibilities
a=x[s:s+13]
y.append(a)
s+=1
#makes new arrays from cumulative sum of each array in y
#appends new arrays to z
for j in y:
b = j.cumprod()
z.append(b)
#takes max of each array in z and appends it to w
for l in z:
w.append(max(l))
#w now consists of the products of all possible 13 adjacent digits in d1000
return (int(max(w)))
max_mult(num)
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
"""
# YOUR CODE HERE
q = []
d = [int(r) for r in d1000.replace('\n','')]
d = np.array(d)
def find_max_product(n):
for i in range(1000-n+1):
k = d[i:i+n]
q.append(k.prod())
return np.max(q)
print(find_max_product(13))
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
"""
c = d1000.replace("\n", "")              # drop the newlines from the digit block
numbers = np.array(list(c), dtype=int)   # one integer per digit
i=0
num=0
big=0
for i in range(1000):
n=13
digits=numbers[i:i+n]
num=np.prod(digits)
if num>big:
big=num
print(big)
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = "7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450"
t=[]
p=0
n=np.zeros(1000) #makes array of 1000 zeros
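# float64 is safe for these products: the largest possible value, 9**13 (about 2.54e12),
# is far below 2**53, so no precision is lost.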
for i in range(len(d1000)):
n[i]=int(d1000[i]) #loops through list of numbers and makes them integers
for i in range(len(n)-12): #loops through the 13 adjacent numbers and appends the products to list t
t.append(np.prod(np.array(n[i:i+13])))
if t[i]>p: #replaces value of p if the previous value is larger
p=t[i]
print(p)
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
"""
###Output
_____no_output_____
###Markdown
First, I replaced all the \n's in d1000 with a space to have just a string of digits
###Code
d1000_new = d1000.replace('\n', '')
###Output
_____no_output_____
###Markdown
Then, I appended each integer value in d1000_new to a new list
###Code
lst = []
for i in d1000_new:
lst.append(int(i))
###Output
_____no_output_____
###Markdown
Then I made this list into an array
###Code
a = np.array(lst)
###Output
_____no_output_____
###Markdown
I started the main code by defining the maximum product as zero, then wrote a for loop over all the digits in the array. An if statement bounds the index so that no group of thirteen is attempted where there are not enough digits left (near the end of the number). I used np.prod with array slicing to calculate the product of each thirteen-digit group. A final if statement checks each product as it is created, and if it is larger than the previous maximum, it becomes the new maximum product. Finally, I print the max product.
###Code
maximum_prod = 0
for n in range(0, len(a)+1):
if n <= (len(a) - 12):
prod = np.prod(a[n:n+13])
if prod > maximum_prod:
maximum_prod = prod
print(maximum_prod)
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
"""
thousand_digits= """7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450
"""
i = 0 # this is my counter
list_of_thirteens=[] # this is my list of all possible 13 adjacent numbers within the 1000
while(i<=987): #This will assure that my code will stop at the very last adjacent 13 numbers
a= thousand_digits[i:i+13]
list_of_thirteens.append(a) # adds the adjacent 13 numbers onto my list
i+=1 # shifts to right by one
products=[]
for i in list_of_thirteens: # for each element in my list, turn into integer, and multiply them together
    c = int(np.prod([int(ch) for ch in i]))  # multiply the 13 digits together
products.append(c) # add products to my list-- products
answer = max(products)
print(answer) # this is my answer
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = '7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450'
# YOUR CODE HERE (Late Assignment)
def great(y):  # y is the window length (13 for this problem)
    a = d1000  # the 1000-digit string defined above
    # empty lists to append later values
k = []
j = []
for i in range(len(a)): #Takes string a and converts it to a list of seperate intergers in the same order.
k.append(int(a[i]))
x = np.array(k) #Puts list in array.
for i in range(0, len(x)):
v = np.prod(x[i:i+y]) #Takes product from index i to index i+y, puts it in it own array, and start again at the next index.
j.append(v) #Appends products to empty list
return j
print(max(great(13))) #Finds greatest product in list.
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
"""
def greatest_product(text):
#Change text from str to int list
new = list(text)
for i in range(len(d1000)):
if text[i] == "\n":
new.remove("\n")
i = 0
#Put list into an array
    text_array = np.array(new, dtype=float)  # np.float is deprecated; the builtin float behaves the same here
# Create variables for a 13 digit product
a= 0
b= 13
#Initialize max value
max_value = 0.0
#While loop takes every adjacent 13 digit's product and compares to max_value
while b <= len(text_array):
if b < len(text_array) and text_array[a:b].cumprod()[-1] > max_value:
max_value = text_array[a:b].cumprod()[-1]
a += 1
b += 1
# a and b are incremented together to keep it a 13 digit product
        elif b == len(text_array) and text_array[a:].cumprod()[-1] > max_value:  # compare against the digit array, not the raw text with newlines
max_value = text_array[a:].cumprod()[-1]
else:
a += 1
b += 1
return max_value
greatest_product(d1000)
#Used ellisonbg's idea of a for loop to avoid storing all products
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
"""
# Minimal working version of the intended approach (the original cell was left unfinished):
digits = np.array([int(ch) for ch in d1000 if ch.isdigit()])
products = [digits[i:i + 13].prod() for i in range(len(digits) - 12)]
print(max(products))
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = "\
73167176531330624919225119674426574742355349194934\
96983520312774506326239578318016984801869478851843\
85861560789112949495459501737958331952853208805511\
12540698747158523863050715693290963295227443043557\
66896648950445244523161731856403098711121722383113\
62229893423380308135336276614282806444486645238749\
30358907296290491560440772390713810515859307960866\
70172427121883998797908792274921901699720888093776\
65727333001053367881220235421809751254540594752243\
52584907711670556013604839586446706324415722155397\
53697817977846174064955149290862569321978468622482\
83972241375657056057490261407972968652414535100474\
82166370484403199890008895243450658541227588666881\
16427171479924442928230863465674813919123162824586\
17866458359124566529476545682848912883142607690042\
24219022671055626321111109370544217506941658960408\
07198403850962455444362981230987879927244284909188\
84580156166097919133875499200524063689912560717606\
05886116467109405077541002256983155200055935729725\
71636269561882670428252483600823257530420752963450\
"
number = 0
v = np.zeros((len(d1000) - 12, 13))  # has to be outside the for loops, otherwise it would be re-created each pass
for q in range(len(d1000) - 12):  # one row per possible starting position in d1000
for p in range(13): #loop within loop for range 13... this fills the matrix w/ inception
v[q,p]= int(d1000[q+p]) #Where we actually fill it.
idk = np.cumprod(v[q], dtype = int)[-1] #v[q] is a set of 13... the [-1] means to retrieve the last value of the list.
if idk > number:
number = idk
print(number)
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = 7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450
e1000 = str(d1000)
c = 0
n = -1
x = 1
product = 0
a = np.ones((13),int) #See I used numpy
while c<1000: #The loop goes through the entire sequence
c = c + 1
n = n + 1
#Each value of the array is shifted over 1, and the new value is placed at the 13th spot
    a = np.roll(a, -1)        # shift every value left by one position
    a[12] = int(e1000[n])     # place the new digit in the 13th spot
x = np.cumprod(a) #The product of the array
x = x[12]
if product < x: #Replaces the product if it is larger than the largest so far.
product = x
print (product)
print (x)
#raise NotImplementedError()
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450"""
g = list(d1000)
z=[]
for i in range(len(g)):
    if g[i].isdigit():  # keep digit characters only (the original test was always True and let '\n' through)
        z.append(g[i])
x = np.array((z), dtype = int)
g = list(d1000)
z=[]
for i in range(len(g)):
    if g[i].isdigit():  # keep digit characters only, dropping the newlines
        z.append(g[i])
d = []
for i in range(len(z)):
if z[i] == '1':
d.append(1.0)
elif z[i] == '2':
d.append(2.0)
elif z[i] == '3':
d.append(3.0)
elif z[i] == '4':
d.append(4.0)
elif z[i] == '5':
d.append(5.0)
elif z[i] == '6':
d.append(6.0)
elif z[i] == '7':
d.append(7.0)
elif z[i] == '8':
d.append(8.0)
elif z[i] == '9':
d.append(9.0)
else:
d.append(0.0)
x = np.array(d)
print(x)
g = list(d1000)
z=[]
for i in range(len(g)):
    if g[i].isdigit():  # keep digit characters only, dropping the newlines
        z.append(g[i])
d = []
for i in range(len(z)):
if z[i] == '1':
d.append(1.0)
elif z[i] == '2':
d.append(2.0)
elif z[i] == '3':
d.append(3.0)
elif z[i] == '4':
d.append(4.0)
elif z[i] == '5':
d.append(5.0)
elif z[i] == '6':
d.append(6.0)
elif z[i] == '7':
d.append(7.0)
elif z[i] == '8':
d.append(8.0)
elif z[i] == '9':
d.append(9.0)
else:
d.append(0.0)
x = np.array(d)
products_13digit = []
for i in range(len(x) - 12):  # include the final 13-digit window
products_13digit.append(x[i:i + 13].prod())
y = np.array(products_13digit).max()
print(y)
"""the printed number y is the answer"""
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
"""
arr1000=np.array([int(n) for n in d1000 if n!='\n']) #converts each element of the list into an integer and excludes "\n"
prod_list=[] #then puts into 1D array
for i in range(0, 1000 - 12):  # 988 possible starting positions for a 13-digit window
prod=arr1000[i:i+13].prod() #takes the product of every consecutive 13 digits and appends them to list
prod_list.append(prod)
arrprod=np.array(prod_list).max() #puts list into array and then finds max of array
print(arrprod)
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450"""
a = list(d1000.split('\n'))
b = ''
for item in a:
b = b + item
c = list(b)
d = []
for item in c:
e = int(item)
d.append(e)
e = np.array(d)
f = np.array(list(range(0,988)))
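# 988 = 1000 - 13 + 1, i.e. the number of possible starting positions for a 13-digit window.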
g = []
for item in f:
h = np.array(e[0 + item: 13 + item])
i = h.cumprod()
g.append(i[-1])
j = np.array(g)
j.max()
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
"""
d=[]
num=['0','1','2','3','4','5','6','7','8','9']
for i in range(len(d1000)):
if (d1000[i]) in num:
d.append(int(d1000[i])) #turns d1000 into list without \n
D=np.array(d) #turns list into array
P=np.zeros((np.size(D)-12,1)) #create array
for i in range(np.size(D)-12):
P[i,0]=max(np.cumprod(D[i:i+13:1])) #insert product of 13 consecutive #s in P
print(max(P))
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
"""
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450"""
maxprod = 0
for i in range(len(d1000)-12):
l = []
for j in range(13):
l.append(int(d1000[i+j]))
r = np.array(l)
p = r.cumprod()[len(l)-1]
if p > maxprod:
maxprod = p
maxprod
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = "73167176531330624919225119674426574742355349194934\
96983520312774506326239578318016984801869478851843\
85861560789112949495459501737958331952853208805511\
12540698747158523863050715693290963295227443043557\
66896648950445244523161731856403098711121722383113\
62229893423380308135336276614282806444486645238749\
30358907296290491560440772390713810515859307960866\
70172427121883998797908792274921901699720888093776\
65727333001053367881220235421809751254540594752243\
52584907711670556013604839586446706324415722155397\
53697817977846174064955149290862569321978468622482\
83972241375657056057490261407972968652414535100474\
82166370484403199890008895243450658541227588666881\
16427171479924442928230863465674813919123162824586\
17866458359124566529476545682848912883142607690042\
24219022671055626321111109370544217506941658960408\
07198403850962455444362981230987879927244284909188\
84580156166097919133875499200524063689912560717606\
05886116467109405077541002256983155200055935729725\
71636269561882670428252483600823257530420752963450"
print(d1000)
def create_digit_array(n):
#turns integer into list of digits
array = np.empty((1, len(str(n))), dtype=int)
for d in range(len(str(n))):
array[0,d] = str(n)[d]
return array
# YOUR CODE HERE
def prod13(array):
    prodarray = np.empty((1, array.size - 12))  # one slot per 13-digit window
    # takes each run of 13 adjacent digits, multiplies them and puts the product into the new array
    for n in range(0, array.size - 12):
        digarray = array[0, n:n+13]
        prodarray[0, n] = np.cumprod(digarray)[-1]
    # find the max of the product array and the digits at that position
    best = np.amax(prodarray)
    start = np.argmax(prodarray)  # argmax must be taken over the products, not the raw digits
    digits = np.array_str(array[0, start:start + 13])
    return digits, best
print(prod13(create_digit_array(d1000)))
assert True # leave this for grading
###Output
_____no_output_____
###Markdown
Project Euler: Problem 8 https://projecteuler.net/problem=8The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.(see the number below)Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?Use NumPy for this computation
###Code
import numpy as np
d1000 = """
73167176531330624919225119674426574742355349194934\
96983520312774506326239578318016984801869478851843\
85861560789112949495459501737958331952853208805511\
12540698747158523863050715693290963295227443043557\
66896648950445244523161731856403098711121722383113\
62229893423380308135336276614282806444486645238749\
30358907296290491560440772390713810515859307960866\
70172427121883998797908792274921901699720888093776\
65727333001053367881220235421809751254540594752243\
52584907711670556013604839586446706324415722155397\
53697817977846174064955149290862569321978468622482\
83972241375657056057490261407972968652414535100474\
82166370484403199890008895243450658541227588666881\
16427171479924442928230863465674813919123162824586\
17866458359124566529476545682848912883142607690042\
24219022671055626321111109370544217506941658960408\
07198403850962455444362981230987879927244284909188\
84580156166097919133875499200524063689912560717606\
05886116467109405077541002256983155200055935729725\
71636269561882670428252483600823257530420752963450
"""
###Output
_____no_output_____
###Markdown
Turn d1000 into a list I can work with
###Code
b = []
for digit in d1000.strip():
b.append(int(digit))
b
###Output
_____no_output_____
###Markdown
create a function that finds the max product of 13 adjacent digits
###Code
c = np.array(b)
def product(n):
themax = 0
numbers = []
    for i in range(len(n) - 12):
        d = n[i:i+13]  # this is the array of 13 adjacent digits
        p = np.cumprod(d)[-1]  # this is the product of that array
if p > themax: #p and d will continue to be replaced by the largest that the for loop finds
themax = p
numbers = d
return themax, numbers
print(product(c))
###Output
(23514624000, array([5, 5, 7, 6, 6, 8, 9, 6, 6, 4, 8, 9, 5]))
###Markdown
Success! Worked on this solution with Ryan Werth during pair programming in class.
###Code
assert True # leave this for grading
###Output
_____no_output_____ |
notebooks/06_Multiple_Hidden_Layers_NN-Restoring_Model.ipynb | ###Markdown
Restoring a multiple hidden layer Neural Network The progress of the model can be saved during and after training. This means that a model can be resumed where it left off, avoiding long training times. Saving also means that you can share your model and others can recreate your work. We will illustrate how to restore a network with multiple fully connected hidden layers. This NN has been saved elsewhere; we will restore the model from "model_path" and make predictions with the trained model after reloading it. We will use the "Concrete Compressive Strength Data Set" from: https://archive.ics.uci.edu/ml/datasets/Concrete+Compressive+Strength We will build a three-hidden-layer neural network to predict the concrete compressive strength from the eight input features. Load configuration
###Code
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from sklearn.datasets import load_iris
from tensorflow.python.framework import ops
import pandas as pd
###Output
/home/parrondo/anaconda3/envs/deeptrading/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
/home/parrondo/anaconda3/envs/deeptrading/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
/home/parrondo/anaconda3/envs/deeptrading/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
/home/parrondo/anaconda3/envs/deeptrading/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
###Markdown
Ingest raw data
###Code
#
# Leave in blanck intentionally
#
###Output
_____no_output_____
###Markdown
Basic pre-process data
###Code
#
# Leave in blanck intentionally
#
###Output
_____no_output_____
###Markdown
Split data
###Code
#
# Leave in blanck intentionally
#
###Output
_____no_output_____
###Markdown
Transform features
###Code
#
# Leave in blanck intentionally
#
# If you have used data transformation becareful to ingest the transformed data to the model before
# making predictions and back-transform the predicitions. See notebook 05.
#
###Output
_____no_output_____
###Markdown
Implement the model
###Code
# Clears the default graph stack and resets the global default graph
ops.reset_default_graph()
# make results reproducible
seed = 2
tf.set_random_seed(seed)
np.random.seed(seed)
# Parameters
learning_rate = 0.005
batch_size = 50
n_features = 8# Number of features in training data
epochs = 10000
display_step = 100
model_path = "../model/tmp/model.ckpt"
n_classes = 1
# Network Parameters
# See figure of the model
d0 = D = n_features # Layer 0 (Input layer number of features)
d1 = 5 # Layer 1 (5 hidden nodes)
d2 = 15 # Layer 2 (15 hidden nodes)
d3 = 5 # Layer 3 (5 hidden nodes)
d4 = C = 1 # Layer 4 (Output layer)
# tf Graph input
print("Placeholders")
X = tf.placeholder(dtype=tf.float32, shape=[None, n_features], name="X")
y = tf.placeholder(dtype=tf.float32, shape=[None,n_classes], name="y")
# Initializers
print("Initializers")
sigma = 1
weight_initializer = tf.variance_scaling_initializer(mode="fan_avg", distribution="uniform", scale=sigma)
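# With mode="fan_avg", a uniform distribution and scale=1, variance_scaling_initializer is
# essentially the Xavier/Glorot uniform initializer.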
bias_initializer = tf.zeros_initializer()
# Create model
def multilayer_perceptron(X, variables):
# Hidden layer with ReLU activation
layer_1 = tf.nn.relu(tf.add(tf.matmul(X, variables['W1']), variables['bias1']))
# Hidden layer with ReLU activation
layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, variables['W2']), variables['bias2']))
# Hidden layer with ReLU activation
layer_3 = tf.nn.relu(tf.add(tf.matmul(layer_2, variables['W3']), variables['bias3']))
# Output layer with ReLU activation
out_layer = tf.nn.relu(tf.add(tf.matmul(layer_3, variables['W4']), variables['bias4']))
return out_layer
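# Note: the output layer also uses ReLU, so predictions are clamped to be non-negative,
# which matches the target here (compressive strength cannot be negative).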
# Store layers weight & bias
variables = {
'W1': tf.Variable(weight_initializer([n_features, d1]), name="W1"), # inputs -> d1 hidden neurons
'bias1': tf.Variable(bias_initializer([d1]), name="bias1"), # one biases for each d1 hidden neurons
'W2': tf.Variable(weight_initializer([d1, d2]), name="W2"), # d1 hidden inputs -> d2 hidden neurons
'bias2': tf.Variable(bias_initializer([d2]), name="bias2"), # one biases for each d2 hidden neurons
'W3': tf.Variable(weight_initializer([d2, d3]), name="W3"), ## d2 hidden inputs -> d3 hidden neurons
'bias3': tf.Variable(bias_initializer([d3]), name="bias3"), # one biases for each d3 hidden neurons
'W4': tf.Variable(weight_initializer([d3, d4]), name="W4"), # d3 hidden inputs -> 1 output
'bias4': tf.Variable(bias_initializer([d4]), name="bias4") # 1 bias for the output
}
# Construct model
y_hat = multilayer_perceptron(X, variables)
# Define loss and optimizer
loss = tf.reduce_mean(tf.square(y - y_hat)) # MSE
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # Train step
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# 'Saver' op to save and restore all the variables
saver = tf.train.Saver()
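# For reference, a minimal sketch of how the checkpoint at model_path would have been written
# in the companion training notebook (not executed here):
#   with tf.Session() as sess:
#       sess.run(init)
#       # ... training loop ...
#       save_path = saver.save(sess, model_path)  # produces the .meta, .index and .data-* files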
###Output
Placeholders
Initializers
###Markdown
Train the model and Evaluate the model
###Code
#
# Leave in blanck intentionally
#
###Output
_____no_output_____
###Markdown
Saving a Tensorflow model So, now we have our model saved. A TensorFlow model has four main files: * a) Meta graph: a protocol buffer which saves the complete TensorFlow graph, i.e. all variables, operations, collections, etc. This file has the .meta extension. * b) and c) Checkpoint files: binary files which contain the values of the weights, biases, gradients and all the other saved variables. TensorFlow changed from version 0.11: instead of a single .ckpt file, there are now two files, .index and .data, that contain our training variables. * d) Along with this, TensorFlow also writes a file named checkpoint which simply keeps a record of the latest checkpoint files saved. Predict Finally, we can use the model to make some predictions. First we transform our samples accordingly.
###Code
#
# You need notebook 05 to make this transform there.
#sc.transform([[203.5, 305.3, 0.0, 203.5, 0.0, 963.4, 630.0, 90],
# [173.0, 116.0, 0.0, 192.0, 0.0, 946.8, 856.8, 90],
# [522.0, 0.0, 0.0, 146.0, 0.0, 896.0, 896.0, 7]]) #True value 51.86, 32.10, 50.51
###Output
_____no_output_____
###Markdown
Make the predictions
###Code
# Running a new session for predictions
print("Starting prediction session...")
with tf.Session() as sess:
    # Initialize variables
    sess.run(init)
    # Restore model weights from previously saved model
    saver.restore(sess, model_path)
    print("Model restored from file: %s" % model_path)
    # We try to predict the Concrete compressive strength (MPa megapascals) of three samples
    feed_dict_std = {X: [[ 9.78564946, 15.74026644, -2.11773518, 9.78564946, -2.11773518,
                           54.23470101, 34.73303791, 3.14666097],
                         [ 8.00160409, 4.66748653, -2.11773518, 9.11297662, -2.11773518,
                           53.26371238, 47.99931622, 3.14666097],
                         [28.41576252, -2.11773518, -2.11773518, 6.42228525, -2.11773518,
                           50.29225322, 50.29225322, -1.70828215]]}
    prediction = sess.run(y_hat, feed_dict_std)
    print(prediction) #True value 51.86, 32.10, 50.51
#
# Revert transformation. Again you need notebook 05.
#y_hat_rev = sc.inverse_transform(prediction)
#y_hat_rev
#
###Output
_____no_output_____
###Markdown
Restoring a multiple hidden layer Neural Network. The progress of the model can be saved during and after training. This means that a model can be resumed where it left off, avoiding long training times. Saving also means that you can share your model and others can recreate your work. We will illustrate how to restore a multiple fully connected hidden layer NN. This NN has been saved elsewhere. We will restore the model from "model_path" and make predictions with the trained model after reloading it. We will use the iris data for this exercise. We will build a three-hidden-layer neural network to predict the fourth attribute, Petal Width, from the other three (Sepal length, Sepal width, Petal length). Load configuration
###Code
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from sklearn.datasets import load_iris
from tensorflow.python.framework import ops
import pandas as pd
###Output
/home/parrondo/anaconda3/envs/deeptrading/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
/home/parrondo/anaconda3/envs/deeptrading/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
/home/parrondo/anaconda3/envs/deeptrading/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
/home/parrondo/anaconda3/envs/deeptrading/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
###Markdown
Ingest raw data
###Code
#
# Intentionally left blank
#
###Output
_____no_output_____
###Markdown
Basic pre-process data
###Code
#
# Intentionally left blank
#
###Output
_____no_output_____
###Markdown
Split data
###Code
#
# Intentionally left blank
#
###Output
_____no_output_____
###Markdown
Transform features
###Code
#
# Intentionally left blank
#
###Output
_____no_output_____
###Markdown
Implement the model
###Code
# Clears the default graph stack and resets the global default graph
ops.reset_default_graph()
# make results reproducible
seed = 2
tf.set_random_seed(seed)
np.random.seed(seed)
# Parameters
learning_rate = 0.005
batch_size = 50
n_features = 3 # Number of features in training data
epochs = 1000*10
display_step = 50
model_path = "/tmp/model.ckpt"
n_classes = 1
# Network Parameters
# See figure of the model
d0 = D = n_features # Layer 0 (Input layer number of features)
d1 = 64 # Layer 1 (64 hidden nodes)
d2 = 32 # Layer 2 (32 hidden nodes)
d3 = 8 # Layer 3 (8 hidden nodes)
d4 = C = 1 # Layer 4 (Output layer)
# tf Graph input
print("Placeholders")
X = tf.placeholder(dtype=tf.float32, shape=[None, n_features], name="X")
y = tf.placeholder(dtype=tf.float32, shape=[None,n_classes], name="y")
# Initializers
print("Initializers")
sigma = 1
weight_initializer = tf.variance_scaling_initializer(mode="fan_avg", distribution="uniform", scale=sigma)
bias_initializer = tf.zeros_initializer()
# Create model
def multilayer_perceptron(X, variables):
    # Hidden layer with ReLU activation
    layer_1 = tf.nn.relu(tf.add(tf.matmul(X, variables['W1']), variables['bias1']))
    # Hidden layer with ReLU activation
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, variables['W2']), variables['bias2']))
    # Hidden layer with ReLU activation
    layer_3 = tf.nn.relu(tf.add(tf.matmul(layer_2, variables['W3']), variables['bias3']))
    # Output layer with ReLU activation
    out_layer = tf.nn.relu(tf.add(tf.matmul(layer_3, variables['W4']), variables['bias4']))
    return out_layer
# Store layers weight & bias
variables = {
'W1': tf.Variable(weight_initializer([n_features, d1]), name="W1"), # inputs -> d1 hidden neurons
'bias1': tf.Variable(bias_initializer([d1]), name="bias1"), # one biases for each d1 hidden neurons
'W2': tf.Variable(weight_initializer([d1, d2]), name="W2"), # d1 hidden inputs -> d2 hidden neurons
'bias2': tf.Variable(bias_initializer([d2]), name="bias2"), # one biases for each d2 hidden neurons
'W3': tf.Variable(weight_initializer([d2, d3]), name="W3"), ## d2 hidden inputs -> d3 hidden neurons
'bias3': tf.Variable(bias_initializer([d3]), name="bias3"), # one biases for each d3 hidden neurons
'W4': tf.Variable(weight_initializer([d3, d4]), name="W4"), # d3 hidden inputs -> 1 output
'bias4': tf.Variable(bias_initializer([d4]), name="bias4") # 1 bias for the output
}
# Construct model
y_hat = multilayer_perceptron(X, variables)
# Define loss and optimizer
loss = tf.reduce_mean(tf.square(y - y_hat)) # MSE
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) # Train step
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# 'Saver' op to save and restore all the variables
saver = tf.train.Saver()
###Output
Placeholders
Initializers
###Markdown
Train the model and Evaluate the model
###Code
#
# Intentionally left blank
#
###Output
_____no_output_____
###Markdown
Saving a Tensorflow model. So, now we have our model saved. A Tensorflow model has four main files:* a) Meta graph: a protocol buffer which saves the complete Tensorflow graph, i.e. all variables, operations, collections, etc. This file has a .meta extension.* b) and c) Checkpoint files: binary files which contain all the values of the weights, biases, gradients and all the other saved variables. Tensorflow changed in version 0.11: instead of a single .ckpt file, we now have two files, .index and .data, that contain our training variables. * d) Along with this, Tensorflow also keeps a file named checkpoint which simply records the latest checkpoint files saved. Predict Finally, we can use the model to make some predictions.
###Code
# Running a new session for predictions
print("Starting prediction session...")
with tf.Session() as sess:
    # Initialize variables
    sess.run(init)
    # Restore model weights from previously saved model
    saver.restore(sess, model_path)
    print("Model restored from file: %s" % model_path)
    # We try to predict the petal width (cm) of three samples
    feed_dict = {X: [[5.1, 3.5, 1.4],
                     [4.8, 3.0, 1.4],
                     [6.3, 3.4, 5.6]]}
    prediction = sess.run(y_hat, feed_dict)
    print(prediction) # True value 0.2, 0.1, 2.4
###Output
Starting prediction session...
INFO:tensorflow:Restoring parameters from /tmp/model.ckpt
Model restored from file: /tmp/model.ckpt
[[0.23460037]
[0.19889006]
[2.0588052 ]]
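###Markdown
To quantify how close the restored model gets, a quick sketch comparing the predictions above with the true petal widths quoted in the code comment (0.2, 0.1 and 2.4):
###Code
# Absolute error of the restored model on the three hand-picked samples:
true_widths = np.array([[0.2], [0.1], [2.4]])
print("Absolute error:", np.abs(np.array(prediction) - true_widths).ravel())
###Output
_____no_output_____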
|
.ipynb_checkpoints/best-neighborhood-checkpoint.ipynb | ###Markdown
Data Set 1--- Number of Parks in the City of Pittsburgh
###Code
import pandas as pd
import numpy as np
import geopandas
%matplotlib inline
import matplotlib.pyplot as plt
# Importing Data
parksData = pd.read_csv("parks-pittsburgh.csv")
parksData.head(10)
typeOf = parksData['type'].value_counts()
typeOf.head(10)
###Output
_____no_output_____
###Markdown
Graphing amount of Parks per Neighborhood
###Code
# Get data related to arrests within certain neighborhoods
rateOfPark = parksData['neighborhood'].value_counts()
rateOfPark.plot(kind="bar", ylim=[0,14], figsize=(20,20))
plt.title("Amount of Parks per Neighborhood")
plt.xlabel("Neighborhood")
plt.ylabel("Amount of Parks")
neighborhoods = geopandas.read_file("Neighborhoods/Neighborhoods_.shp")
###Output
_____no_output_____
###Markdown
Analysis. The number of parks per neighborhood is an indicator of how good a neighborhood is: the more parks a neighborhood has, the higher its quality. As seen in the graph above, East Liberty has 12 parks, making it (according to this very specific standard) the best neighborhood in Pittsburgh. Data Set 2--- Police Arrests in the City of Pittsburgh
###Code
# Import data
police_arrests = pd.read_csv("police-arrest.csv")
police_arrests.head(10)
# Get data related to arrests within certain neighborhoods, then plot that data
rates = police_arrests['INCIDENTNEIGHBORHOOD'].value_counts()
rates.tail(20).plot(kind="bar", ylim=[0,120])
plt.title("Lowest Crimes per Neighborhoods")
plt.xlabel("Neighborhood")
plt.ylabel("Amount")
neighborhoods = geopandas.read_file("Neighborhoods/Neighborhoods_.shp")
###Output
_____no_output_____
###Markdown
Graphical Data. Going simply off of this graph, the two neighborhoods with the lowest number of arrests would be Troy Hill-Herrs Island and Mt. Oliver Neighborhood. However, the number of arrests within a neighborhood is a very specific metric; there are other things that determine what should be considered the best Pittsburgh neighborhood to live in. To start looking at this, we will broaden our data: instead of the number of arrests, we will look at a different data set describing the general crime rate for major crimes.
###Code
# Read in new data set, print out first 10 instances
major_arrests = pd.read_csv("arrests-for-major-crimes-1972.csv")
major_arrests.head(10)
# Here, we are assigning each major type of crime to its own Pandas Series. These series will contain a number representing
# a specific neighborhood (alphabetical order corresponds to numerical order) as well as a value for how many arrests there
# were for that particular type of crime. From there, this data will be plotted onto the same plot so we can see which type
# of crime is most common in which neighborhood.
arrests_murder = major_arrests['number_arrests_murder']
arrests_rape = major_arrests['number_arrests_rape']
arrests_robbery = major_arrests['number_arrests_robbery']
arrests_assault = major_arrests['number_arrests_assault']
arrests_burglary = major_arrests['number_arrests_burglary']
arrests_larceny = major_arrests['number_arrests_larceny']
arrests_total = pd.concat([arrests_murder, arrests_rape, arrests_robbery, arrests_assault, arrests_burglary, arrests_larceny], axis=1)
arrests_total.plot(figsize=(10, 10))
plt.title("Major Crimes per Neighborhood")
plt.xlabel("Neighborhood (Represented by Number)")
plt.ylabel("Amount")
###Output
_____no_output_____
###Markdown
From this graph, we can see that typically larceny, or theft of personal property, is much higher than the other major crimes that come close to it, which in this case would be burglary and assault. However, this graph can be improved again if, instead of using the raw counts, we use the total crime RATE of each of the neighborhoods.
###Code
# Thanks to this new data set, we are given the population, and as a result the crime rate for each neighborhood.
# Now we can plot this on it's own graph.
crime_rate = major_arrests['overall_crime_rate']
crime_rate.plot(kind="bar", figsize=(20,10))
plt.title("Total Crime Rate per Neighborhood")
plt.xlabel("Neighborhood (Represented by Number)")
plt.ylabel("Amount")
###Output
_____no_output_____
###Markdown
Getting the Lowest rates of Crime from the GraphNow, given this graph, there is still a lot of data, so we can break it down into the neighborhoods that have the lowest crime rates. This will make the data more presentable and easier to digest.
###Code
# Here, we create a new dictionary for the neighborhoods with the lowest crime rates. We will loop through each neighborhood
# and check to see if its crime rate is less than a certain threshold, in this case, 2. From here we add a new key/value
# pair, the neighborhood and its corresponding crime rate. Then, we transfer that to a Pandas Series for ease of
# graphing later on.
low_rates = {}
for x in range(len(crime_rate)):
    if crime_rate[x] < 2:
        if major_arrests['neighborhood'][x] not in low_rates:
            low_rates[major_arrests['neighborhood'][x]] = crime_rate[x]
low_rates_dataframe = pd.Series(low_rates)
print(low_rates_dataframe)
###Output
Banksville 1.94
Beechview 1.98
Brighton Heights 1.64
Brookline 1.91
Carrick 1.98
Crafton Heights - Westwood - Oakwood 1.41
East Carnegie 1.82
Elliot 1.70
Harpen Hilltop 0.99
Morningside 1.61
Perry North 1.84
Shadeland Halls Grove 1.79
Sheraton Chartiers 1.96
Southside Slopes 1.82
Swisshelm Park 1.72
Troy Hill 1.78
Upper Lawrenceville 1.80
dtype: float64
###Markdown
Lowest Crime Rates. This data provides us with the neighborhoods with the lowest crime rate, calculated from the major arrests in each area. From here, we can graph this data and find the neighborhood(s) with the lowest amount of crime relative to their population.
###Code
low_rates_dataframe.plot.bar(figsize=(7, 7))
plt.title("Lowest Crime Rates per Neighborhood")
plt.xlabel("Neighborhood")
plt.ylabel("Amount")
###Output
_____no_output_____
###Markdown
Conclusion From the DataAfter looking at total crime rate, we can see that our outcome is completely different! Now, our neighborhoods with the lowest amount of major crimes are Morningside, Crafton Heights - Westwood - Oakwood, and Elliot. Now, we can take this information and combine it with our other metrics to get a more refined answer to our question. Data Set 3--- Fire Incidents in the City of Pittsburgh
###Code
fireIncidents = pd.read_csv("fire-incidents-pittsburgh.csv")
fireIncidents.head()
fireIncidents["neighborhood"].value_counts().tail(50)
fireIncidents["neighborhood"].value_counts().tail(50).plot(kind="bar", figsize=(20, 10))
###Output
_____no_output_____
###Markdown
Conclusion---
###Code
leastFireNeighborhoods = []
for neighborhood in fireIncidents["neighborhood"].value_counts().tail(35).index.tolist():
    if isinstance(neighborhood, str):
        leastFireNeighborhoods.append(neighborhood)
print(leastFireNeighborhoods)
least_crime_neighborhood = []
for n in low_rates_dataframe.index.tolist():
    least_crime_neighborhood.append(n)
print(least_crime_neighborhood)
most_parks = []
for p in parksData["neighborhood"].value_counts().head(35).index.tolist():
    most_parks.append(p)
print(most_parks)
common_neighborhoods = []
for x in most_parks:
    if x in least_crime_neighborhood: #most important
        if x in leastFireNeighborhoods: #2nd important
            if x not in common_neighborhoods: #least important
                common_neighborhoods.append(x)
print(common_neighborhoods)
print(len(common_neighborhoods))
###Output
['Upper Lawrenceville', 'Swisshelm Park']
2
###Markdown
Introduction Our goal today is to determine the best neighborhood in Pittsburgh. There are many possible factors as to what can make a neighborhood the "best", but we determined that a general level of safety should be the top priority. In order to determine the safest neighborhood in Pittsburgh, we used data sets on fire and crime incidents to determine how 'dangerous' a certain neighborhood might be; we used a data set on the number of parks to determine the general quality of an overall neighborhood. The "best" neighborhood would appear very low in the crime and fire data sets and high in the parks data set. General Metric Our general metric is based primarily around safety; however, in order to determine a baseline quality of a neighborhood, we used a parks data set. We then weighed crime factors and fire incidents into the data sets to determine the safety of neighborhoods based around their parks. Data Set 1: Parks--- Number of Parks in the City of Pittsburgh
###Code
import pandas as pd
import numpy as np
import geopandas
%matplotlib inline
import matplotlib.pyplot as plt
# Importing Data
parksData = pd.read_csv("parks-pittsburgh.csv")
parksData.head(10)
typeOf = parksData['type'].value_counts()
typeOf.head(10)
###Output
_____no_output_____
###Markdown
Graphing amount of Parks per Neighborhood
###Code
# Get data related to arrests within certain neighborhoods
rateOfPark = parksData['neighborhood'].value_counts()
rateOfPark.plot(kind="bar", ylim=[0,14], figsize=(20,20))
plt.title("Amount of Parks per Neighborhood")
plt.xlabel("Neighborhood")
plt.ylabel("Amount of Parks")
neighborhoods = geopandas.read_file("Neighborhoods/Neighborhoods_.shp")
###Output
_____no_output_____
###Markdown
Analysis. The number of parks per neighborhood is an indicator of how good a neighborhood is: the more parks a neighborhood has, the higher its quality. As seen in the graph above, East Liberty has 12 parks, making it (according to this very specific standard) the best neighborhood in Pittsburgh. Data Set 2: Crime--- Police Arrests in the City of Pittsburgh
###Code
# Import data
police_arrests = pd.read_csv("police-arrest.csv")
police_arrests.head(10)
# Get data related to arrests within certain neighborhoods, then plot that data
rates = police_arrests['INCIDENTNEIGHBORHOOD'].value_counts()
rates.tail(20).plot(kind="bar", ylim=[0,120])
plt.title("Lowest Crimes per Neighborhoods")
plt.xlabel("Neighborhood")
plt.ylabel("Amount")
neighborhoods = geopandas.read_file("Neighborhoods/Neighborhoods_.shp")
###Output
_____no_output_____
###Markdown
Graphical Data. Going simply off of this graph, the two neighborhoods with the lowest number of arrests would be Troy Hill-Herrs Island and Mt. Oliver Neighborhood. However, the number of arrests within a neighborhood is a very specific metric; there are other things that determine what should be considered the best Pittsburgh neighborhood to live in. To start looking at this, we will broaden our data: instead of the number of arrests, we will look at a different data set describing the general crime rate for major crimes.
###Code
# Read in new data set, print out first 10 instances
major_arrests = pd.read_csv("arrests-for-major-crimes-1972.csv")
major_arrests.head(10)
# Here, we are assigning each major type of crime to its own Pandas Series. These series will contain a number representing
# a specific neighborhood (alphabetical order corresponds to numerical order) as well as a value for how many arrests there
# were for that particular type of crime. From there, this data will be plotted onto the same plot so we can see which type
# of crime is most common in which neighborhood.
arrests_murder = major_arrests['number_arrests_murder']
arrests_rape = major_arrests['number_arrests_rape']
arrests_robbery = major_arrests['number_arrests_robbery']
arrests_assault = major_arrests['number_arrests_assault']
arrests_burglary = major_arrests['number_arrests_burglary']
arrests_larceny = major_arrests['number_arrests_larceny']
arrests_total = pd.concat([arrests_murder, arrests_rape, arrests_robbery, arrests_assault, arrests_burglary, arrests_larceny], axis=1)
arrests_total.plot(figsize=(10, 10))
plt.title("Major Crimes per Neighborhood")
plt.xlabel("Neighborhood (Represented by Number)")
plt.ylabel("Amount")
###Output
_____no_output_____
###Markdown
From this graph, we can see that typically larceny, or theft of personal property, is much higher than the other major crimes that come close to it, which in this case would be burglary and assault. However, this graph can be improved again if, instead of using the raw counts, we use the total crime RATE of each of the neighborhoods.
###Code
# Thanks to this new data set, we are given the population, and as a result the crime rate for each neighborhood.
# Now we can plot this on it's own graph.
crime_rate = major_arrests['overall_crime_rate']
crime_rate.plot(kind="bar", figsize=(20,10))
plt.title("Total Crime Rate per Neighborhood")
plt.xlabel("Neighborhood (Represented by Number)")
plt.ylabel("Amount")
###Output
_____no_output_____
###Markdown
Getting the Lowest rates of Crime from the GraphNow, given this graph, there is still a lot of data, so we can break it down into the neighborhoods that have the lowest crime rates. This will make the data more presentable and easier to digest.
###Code
# Here, we create a new dictionary for the neighborhoods with the lowest crime rates. We will loop through each neighborhood
# and check to see if its crime rate is less than a certain threshold, in this case, 2. From here we add a new key/value
# pair, the neighborhood and its corresponding crime rate. Then, we transfer that to a Pandas Series for ease of
# graphing later on.
low_rates = {}
for x in range(len(crime_rate)):
    if crime_rate[x] < 2:
        if major_arrests['neighborhood'][x] not in low_rates:
            low_rates[major_arrests['neighborhood'][x]] = crime_rate[x]
low_rates_dataframe = pd.Series(low_rates)
print(low_rates_dataframe)
###Output
Banksville 1.94
Beechview 1.98
Brighton Heights 1.64
Brookline 1.91
Carrick 1.98
Crafton Heights - Westwood - Oakwood 1.41
East Carnegie 1.82
Elliot 1.70
Harpen Hilltop 0.99
Morningside 1.61
Perry North 1.84
Shadeland Halls Grove 1.79
Sheraton Chartiers 1.96
Southside Slopes 1.82
Swisshelm Park 1.72
Troy Hill 1.78
Upper Lawrenceville 1.80
dtype: float64
###Markdown
Lowest Crime Rates. This data provides us with the neighborhoods with the lowest crime rate, calculated from the major arrests in each area. From here, we can graph this data and find the neighborhood(s) with the lowest amount of crime relative to their population.
###Code
low_rates_dataframe.plot.bar(figsize=(7, 7))
plt.title("Lowest Crime Rates per Neighborhood")
plt.xlabel("Neighborhood")
plt.ylabel("Amount")
###Output
_____no_output_____
###Markdown
Conclusion From the DataAfter looking at total crime rate, we can see that our outcome is completely different! Now, our neighborhoods with the lowest amount of major crimes are Morningside, Crafton Heights - Westwood - Oakwood, and Elliot. Now, we can take this information and combine it with our other metrics to get a more refined answer to our question. Data Set 3: Fires--- Fire Incidents in the City of Pittsburgh
###Code
fireIncidents = pd.read_csv("fire-incidents-pittsburgh.csv")
fireIncidents.head()
fireIncidents["neighborhood"].value_counts().tail(50)
fireIncidents["neighborhood"].value_counts().tail(50).plot(kind="bar", figsize=(20, 10))
###Output
_____no_output_____
###Markdown
Conclusion---
###Code
leastFireNeighborhoods = []
for neighborhood in fireIncidents["neighborhood"].value_counts().tail(35).index.tolist():
    if isinstance(neighborhood, str):
        leastFireNeighborhoods.append(neighborhood)
print(leastFireNeighborhoods)
least_crime_neighborhood = []
for n in low_rates_dataframe.index.tolist():
    least_crime_neighborhood.append(n)
print(least_crime_neighborhood)
most_parks = []
for p in parksData["neighborhood"].value_counts().head(35).index.tolist():
    most_parks.append(p)
print(most_parks)
common_neighborhoods = []
for x in most_parks:
    if x in least_crime_neighborhood: #most important
        if x in leastFireNeighborhoods: #2nd important
            if x not in common_neighborhoods: #least important
                common_neighborhoods.append(x)
print(common_neighborhoods)
print(len(common_neighborhoods))
###Output
['Upper Lawrenceville', 'Swisshelm Park']
2
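###Markdown
The same intersection can be expressed more compactly with Python sets; a sketch using the lists built above (purely a style alternative, it yields the same neighborhoods):
###Code
# Intersect the three candidate lists, preserving the park-count ordering:
common_set = set(most_parks) & set(least_crime_neighborhood) & set(leastFireNeighborhoods)
common_ordered = [n for n in most_parks if n in common_set]
print(common_ordered)
###Output
_____no_output_____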
|
04_Apply/Students_Alcohol_Consumption/My_Exercises.ipynb | ###Markdown
Student Alcohol Consumption Introduction:This time you will download a dataset from the UCI. Step 1. Import the necessary libraries
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/04_Apply/Students_Alcohol_Consumption/student-mat.csv). Step 3. Assign it to a variable called df.
###Code
df = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/04_Apply/Students_Alcohol_Consumption/student-mat.csv')
df.head(5)
###Output
_____no_output_____
###Markdown
Step 4. For the purpose of this exercise slice the dataframe from 'school' until the 'guardian' column
###Code
df.loc[:, 'school':'guardian']
###Output
_____no_output_____
###Markdown
Step 5. Create a lambda function that will capitalize strings.
###Code
cap = lambda x: x.capitalize()
###Output
_____no_output_____
###Markdown
Step 6. Capitalize both Mjob and Fjob
###Code
df['Mjob'].apply(cap), df['Fjob'].apply(cap)
###Output
_____no_output_____
###Markdown
Step 7. Print the last elements of the data set.
###Code
df.tail(1)
###Output
_____no_output_____
###Markdown
Step 8. Did you notice the original dataframe is still lowercase? Why is that? Fix it and capitalize Mjob and Fjob. Step 9. Create a function called majority that returns a boolean value to a new column called legal_drinker (Consider majority as older than 17 years old)
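One way to answer Step 8: `apply` returns a new Series and does not modify the dataframe in place, so the capitalized values have to be assigned back (a short sketch):
###Code
# Step 8: assign the capitalized job columns back onto the dataframe
df['Mjob'] = df['Mjob'].apply(cap)
df['Fjob'] = df['Fjob'].apply(cap)
df.tail(1)
###Output
_____no_output_____
###Markdown
Now for Step 9: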
###Code
def majority(x):
    if x > 17:
        return True
    else:
        return False
df['legal_drinker'] = df['age'].apply(majority)
###Output
_____no_output_____
###Markdown
Step 10. Multiply every number of the dataset by 10. I know this makes no sense, don't forget it is just an exercise
###Code
def times10(x):
    if isinstance(x, (int, float)):
        return 10 * x
    return x
df.applymap(times10)
###Output
_____no_output_____ |
nasa-turbofan-rul-xgboost/notebooks/5 - Hyperparameter tuning.ipynb | ###Markdown
Predictive Maintenance using Machine Learning on Sagemaker*Part 4 - Hyperparameter tuning* Initialization---Directory structure to run this notebook:```nasa-turbofan-rul-xgboost|+--- data| || +--- interim: intermediate data we can manipulate and process| || \--- raw: *immutable* data downloaded from the source website|+--- notebooks: all the notebooks are positionned here``` Imports
###Code
import matplotlib.pyplot as plt
import time
import sagemaker
import boto3
import seaborn as sns
import pandas as pd
import os
import errno
import utils
from sagemaker.amazon.amazon_estimator import get_image_uri
from sagemaker.predictor import RealTimePredictor, csv_serializer, csv_deserializer
from IPython.display import clear_output
from datetime import datetime
from sklearn.externals import joblib
from sklearn.metrics import confusion_matrix, accuracy_score, r2_score, roc_auc_score, precision_score, recall_score, f1_score
sns.set_style('darkgrid')
!pip install tabulate
import tabulate
###Output
Requirement already satisfied: tabulate in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (0.8.7)
You are using pip version 10.0.1, however version 20.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
###Markdown
Loading data from the previous notebook
###Code
%store -r
# If the data are not present in the notebook local storage, we need to load them from disk:
success_msg = 'Loaded "test_data_scaled", "reg_results_df" and "cls_results_df"'
if 'test_data_scaled' not in locals():
    print('Data not available from previous notebook, loading from disk.')
    try:
        local_path = '../data/interim'
        test_data_scaled = pd.read_csv(os.path.join(local_path, 'test_data_scaled.csv'))
        reg_results_df = pd.read_csv(os.path.join(local_path, 'reg_results_df.csv'))
        cls_results_df = pd.read_csv(os.path.join(local_path, 'cls_results_df.csv'))
        print(success_msg)
    except Exception as e:
        if (e.errno == errno.ENOENT):
            print('Files not found to load data from: you need to execute the previous notebook.')
else:
    print(success_msg)
###Output
Loaded "test_data_scaled", "reg_results_df" and "cls_results_df"
###Markdown
Set general information about this SageMaker session
###Code
role = sagemaker.get_execution_role()
session = sagemaker.Session()
bucket_name = session.default_bucket()
region = boto3.Session().region_name
prefix = 'nasa_engine_data'
# Fetch the training container for SageMaker built-in XGBoost algorithm:
xgb_container = get_image_uri(region, 'xgboost', '0.90-1')
###Output
_____no_output_____
###Markdown
Hyperparameter tuning job--- Optimizing the classification model Defining the model and the tuning job. Before launching a tuning job, we need to define the base Estimator object that will be used. This is similar to the Estimator object created for a single training run:
###Code
# Build a training job name:
training_job_name = 'xgboost-nasa-health'
# Build the estimator object:
model_artifacts_path = 's3://{}/{}/output'.format(bucket_name, prefix)
xgb_classification_estimator = sagemaker.estimator.Estimator(
image_name=xgb_container,
role=role,
train_instance_count=1,
train_instance_type='ml.m5.large',
output_path=model_artifacts_path,
sagemaker_session=session,
base_job_name=training_job_name
)
# Link hyperparameters to this estimator:
xgb_classification_estimator.set_hyperparameters(
max_depth=6, # Max depth of a given tree
eta=0.3, # Step size shrinkage used in updates to prevent overfitting
gamma=0, # Minimum loss reduction required to make a further partition on a leaf node of the tree
min_child_weight=1, # Minimum sum of instance weight (hessian) needed in a child
subsample=1.0, # Subsample ratio of the training instance
silent=0, # 0 means print running messages
objective='binary:logistic', # Learning task and learning objective
num_round=40 # The number of rounds to run the training
)
###Output
_____no_output_____
###Markdown
Now we need to define the exploration ranges for the hyperparameter tuning job. You can refer to [this page](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost-tuning.html) to understand the different metrics computed by the XGBoost algorithm and the associated optimization direction:
###Code
from sagemaker.tuner import HyperparameterTuner, IntegerParameter, ContinuousParameter
xgb_classification_tuner = HyperparameterTuner(
estimator = xgb_classification_estimator, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:error', # The metric used to compare trained models (classification error in this case).
objective_type = 'Minimize', # We wish to minimize this error metric.
max_jobs = 10, # The total number of models to train
max_parallel_jobs = 2, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 10),
'eta': ContinuousParameter(0.01, 0.3),
'gamma': ContinuousParameter(0, 10),
'subsample': ContinuousParameter(0.5, 1.0),
'colsample_bytree': ContinuousParameter(0.5, 1.0)
},
base_tuning_job_name = training_job_name + '-tuner'
)
###Output
_____no_output_____
###Markdown
We still need to encapsulate the inputs in S3 objects to make sure content type is correctly identified by the algorithm:
###Code
s3_input_train = sagemaker.s3_input(s3_data='s3://{}/{}/input/training_classification.csv'.format(bucket_name, prefix), content_type='text/csv')
s3_input_validation = sagemaker.s3_input(s3_data='s3://{}/{}/input/validation_classification.csv'.format(bucket_name, prefix), content_type='text/csv')
###Output
_____no_output_____
###Markdown
We can now launch the hyperparameter job by training the tuner created previously. Note that by default the command gives back control to the notebook and does not wait for the tuning job to be finished.
###Code
xgb_classification_tuner.fit(inputs={'train': s3_input_train, 'validation': s3_input_validation}, logs=True)
###Output
_____no_output_____
###Markdown
The following cell illustrates how to follow what is going on in the background. It provides insights similar to what is visible in the AWS Console, in the SageMaker Hyperparameter Tuning Jobs section:
###Code
status = utils.get_tuner_status(xgb_classification_tuner._current_job_name)
if (status != 'Completed'):
    print('Hyperparameter job not completed: ' + status)
###Output
Tuning job in progress (status: InProgress)
8 training jobs are complete:
+----+-----------------------------------------------+-----------------------+-----------+----------+-------------+-------------+--------------------+
| | TrainingJobName | FinalObjectiveValue | eta | gamma | max_depth | subsample | colsample_bytree |
|----+-----------------------------------------------+-----------------------+-----------+----------+-------------+-------------+--------------------|
| 2 | xgboost-nasa-health--200515-1220-008-7de0f75a | 0.046523 | 0.0203725 | 0.309479 | 3 | 0.980986 | 0.70999 |
| 3 | xgboost-nasa-health--200515-1220-007-8b819baa | 0.049673 | 0.0163843 | 9.46931 | 8 | 0.874694 | 0.977721 |
| 4 | xgboost-nasa-health--200515-1220-006-84f06328 | 0.058638 | 0.274152 | 0.103553 | 9 | 0.830779 | 0.674763 |
| 5 | xgboost-nasa-health--200515-1220-005-7e8af942 | 0.050157 | 0.0101384 | 0.203668 | 8 | 0.535521 | 0.935414 |
| 6 | xgboost-nasa-health--200515-1220-004-b304ec19 | 0.052338 | 0.129211 | 1.23098 | 10 | 0.870726 | 0.619252 |
| 7 | xgboost-nasa-health--200515-1220-003-16aec8a4 | 0.055246 | 0.290914 | 5.56204 | 3 | 0.778685 | 0.777953 |
| 9 | xgboost-nasa-health--200515-1220-001-b114a3df | 0.047008 | 0.060496 | 0.209158 | 3 | 0.84528 | 0.813776 |
| 8 | xgboost-nasa-health--200515-1220-002-e86d7663 | 0.053792 | 0.146725 | 5.84248 | 5 | 0.598682 | 0.90188 |
+----+-----------------------------------------------+-----------------------+-----------+----------+-------------+-------------+--------------------+
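###Markdown
If you prefer not to depend on the local `utils` helper, roughly the same information can be pulled directly from the SageMaker API with boto3 (a sketch that only prints the overall status and the per-state training job counters):
###Code
# Query the tuning job status directly through boto3:
sm = boto3.client('sagemaker')
tuning_desc = sm.describe_hyper_parameter_tuning_job(
    HyperParameterTuningJobName=xgb_classification_tuner._current_job_name)
print('Status:', tuning_desc['HyperParameterTuningJobStatus'])
print('Training jobs:', tuning_desc['TrainingJobStatusCounters'])
###Output
_____no_output_____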
###Markdown
Deploying an endpoint to query the best model foundNow that the hyperparameter tuning job is finished, we can deploy an endpoint to query the best model found:
###Code
cls_endpoint_name = 'xgboost-nasa-classification-v2-{}'.format(datetime.now().strftime("%Y%m%d-%H%M%S"))
print('The following endpoint will be created for the best estimator previously found:', cls_endpoint_name)
# Get the training job descriptor to extract the location of the
# model artifacts on S3 and the container for the algorithm image:
# Collect the best training job name and associated estimator:
sm_client = boto3.client('sagemaker')
best_xgb_training_job = xgb_classification_tuner.best_training_job()
training_job_description = sm_client.describe_training_job(TrainingJobName=best_xgb_training_job)
model_artifacts = training_job_description['ModelArtifacts']['S3ModelArtifacts']
training_image = training_job_description['AlgorithmSpecification']['TrainingImage']
# Create a deployable model:
sm_client.create_model(
ModelName = cls_endpoint_name + '-model',
ExecutionRoleArn = role,
PrimaryContainer = {
'Image': training_image,
'ModelDataUrl': model_artifacts
}
)
# Creates the endpoint configuration: this entity describes the distribution of traffic
# across the models, whether split, shadowed, or sampled in some way:
sm_client.create_endpoint_config(
EndpointConfigName = cls_endpoint_name + '-endpoint-cfg',
ProductionVariants=[{
'InstanceType': 'ml.t2.medium',
'InitialVariantWeight': 1,
'InitialInstanceCount': 1,
'ModelName': cls_endpoint_name + '-model',
'VariantName':'AllTraffic'
}]
)
# Creates the endpoint: this entity serves up the model, through
# specifying the name and configuration defined above:
xgb_classification_predictor = sm_client.create_endpoint(
EndpointName = cls_endpoint_name,
EndpointConfigName = cls_endpoint_name + '-endpoint-cfg'
)
###Output
The following endpoint will be created for the best estimator previously found: xgboost-nasa-classification-v2-20200515-123805
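###Markdown
As an aside, the SageMaker Python SDK also exposes a higher-level shortcut that wraps these three calls; a sketch, assuming the SDK version in use provides `HyperparameterTuner.deploy` (the explicit boto3 calls above are kept to show what happens under the hood):
###Code
# One-line alternative, left commented out so we do not create (and pay for) a second endpoint:
# xgb_best_predictor = xgb_classification_tuner.deploy(
#     initial_instance_count=1,
#     instance_type='ml.t2.medium')
###Output
_____no_output_____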
###Markdown
Let's wait for the endpoint to be created:
###Code
status = sm_client.describe_endpoint(EndpointName=cls_endpoint_name)['EndpointStatus']
print('Endpoint creation in progress... ', end='')
while status == 'Creating':
    time.sleep(60)
    print('#', end='')
    status = sm_client.describe_endpoint(EndpointName=cls_endpoint_name)['EndpointStatus']
if (status == 'InService'):
    print('\nEndpoint creation successful.')
    # Build a predictor object which will receive our prediction requests:
    xgb_classification_predictor = RealTimePredictor(
        endpoint = cls_endpoint_name,
        sagemaker_session = session,
        serializer = csv_serializer,
        deserializer = csv_deserializer
    )
else:
    print('\nEndpoint creation failed.')
###Output
Endpoint creation in progress... ##########
Endpoint creation successful.
###Markdown
Get prediction for the test data
###Code
# We want to estimate the health status of each unit:
tuned_cls_results_df = pd.DataFrame(columns=['unit_number', 'real_label', 'predicted_label'])
# Loop through each unit:
for i in range(0, 100):
    # Get all the test data for the current unit:
    row = test_data_scaled[test_data_scaled['unit_number'] == (i+1)].iloc[-1, :]
    # Join them together into a CSV string before sending them to the predictor:
    test_sample = ', '.join(row[X_train.columns].map(str).tolist())
    prediction = xgb_classification_predictor.predict(test_sample)
    # Add the result for the current unit to the results dataframe:
    tuned_cls_results_df = tuned_cls_results_df.append({
        'unit_number': (i+1),
        'real_label': row['label'],
        'predicted_label': round(float(prediction[0][0]))
    }, ignore_index=True)
tuned_cls_results_df = tuned_cls_results_df.set_index('unit_number')
tuned_cls_results_df.head()
###Output
_____no_output_____
###Markdown
Classification scores
###Code
y = cls_results_df['real_label']
yhat = cls_results_df['predicted_label']
tuned_y = tuned_cls_results_df['real_label']
tuned_yhat = tuned_cls_results_df['predicted_label']
cls_results_comparison = pd.DataFrame(columns=['Metric', 'Initial model', 'Tuned model'])
cls_results_comparison = cls_results_comparison.append({
'Metric': 'Accuracy',
'Initial model': round(accuracy_score(y, yhat), 2),
'Tuned model': round(accuracy_score(tuned_y, tuned_yhat), 2),
}, ignore_index=True)
cls_results_comparison = cls_results_comparison.append({
'Metric': 'Roc AuC',
'Initial model': round(roc_auc_score(y, yhat), 2),
'Tuned model': round(roc_auc_score(tuned_y, tuned_yhat), 2),
}, ignore_index=True)
cls_results_comparison = cls_results_comparison.append({
'Metric': 'Precision',
'Initial model': round(precision_score(y, yhat), 2),
'Tuned model': round(precision_score(tuned_y, tuned_yhat), 2),
}, ignore_index=True)
cls_results_comparison = cls_results_comparison.append({
'Metric': 'Recall',
'Initial model': round(recall_score(y, yhat), 2),
'Tuned model': round(recall_score(tuned_y, tuned_yhat), 2),
}, ignore_index=True)
cls_results_comparison = cls_results_comparison.append({
'Metric': 'F1 Score',
'Initial model': round(f1_score(y, yhat, average='binary'), 2),
'Tuned model': round(f1_score(tuned_y, tuned_yhat, average='binary'), 2),
}, ignore_index=True)
cls_results_comparison
###Output
_____no_output_____
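###Markdown
The same comparison table can be built with less repetition by looping over the metric functions; a sketch using the series defined above (a pure style alternative, the numbers are identical):
###Code
# Build the comparison table by iterating over (name, function) pairs:
metric_fns = [('Accuracy', accuracy_score), ('Roc AuC', roc_auc_score),
              ('Precision', precision_score), ('Recall', recall_score), ('F1 Score', f1_score)]
compact_comparison = pd.DataFrame([{
    'Metric': name,
    'Initial model': round(fn(y, yhat), 2),
    'Tuned model': round(fn(tuned_y, tuned_yhat), 2)
} for name, fn in metric_fns])
compact_comparison
###Output
_____no_output_____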
###Markdown
Confusion matrix
###Code
cm1 = confusion_matrix(y, yhat)
fig1 = utils.print_confusion_matrix(cm1, ['healthy', 'not healthy'], figsize = (6,4))
cm2 = confusion_matrix(tuned_y, tuned_yhat)
fig2 = utils.print_confusion_matrix(cm2, ['healthy', 'not healthy'], figsize = (6,4))
###Output
_____no_output_____
###Markdown
Optimizing the regression model Defining the model and the tuning job. Before launching a tuning job, we need to define the base Estimator object that will be used. This is similar to the Estimator object created for a single training run:
###Code
# Build a training job name:
training_job_name = 'xgboost-nasa-rul'
# Build the estimator object:
model_artifacts_path = 's3://{}/{}/output'.format(bucket_name, prefix)
xgb_regression_estimator = sagemaker.estimator.Estimator(
image_name=xgb_container,
role=role,
train_instance_count=1,
train_instance_type='ml.m5.large',
output_path=model_artifacts_path,
sagemaker_session=session,
base_job_name=training_job_name
)
# Link hyperparameters to this estimator:
xgb_regression_estimator.set_hyperparameters(
max_depth=6, # Max depth of a given tree
eta=0.3, # Step size shrinkage used in updates to prevent overfitting
gamma=0, # Minimum loss reduction required to make a further partition on a leaf node of the tree
min_child_weight=1, # Minimum sum of instance weight (hessian) needed in a child
subsample=1.0, # Subsample ratio of the training instance
silent=0, # 0 means print running messages
objective='reg:squarederror', # Learning task and learning objective
num_round=40 # The number of rounds to run the training
)
###Output
_____no_output_____
###Markdown
Now we need to define the exploration ranges for the hyperparameter tuning job. You can refer to [this page](https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost-tuning.html) to understand the different metrics computed by the XGBoost algorithm and the associated optimization direction:
###Code
from sagemaker.tuner import HyperparameterTuner, IntegerParameter, ContinuousParameter
xgb_regression_tuner = HyperparameterTuner(
estimator = xgb_regression_estimator, # The estimator object to use as the basis for the training jobs.
objective_metric_name = 'validation:rmse', # The metric used to compare trained models (RMSE in this case).
objective_type = 'Minimize', # We wish to minimize this error metric.
max_jobs = 10, # The total number of models to train
max_parallel_jobs = 2, # The number of models to train in parallel
hyperparameter_ranges = {
'max_depth': IntegerParameter(3, 10),
'eta': ContinuousParameter(0.01, 0.3),
'gamma': ContinuousParameter(0, 10),
'subsample': ContinuousParameter(0.5, 1.0),
'colsample_bytree': ContinuousParameter(0.5, 1.0)
},
base_tuning_job_name = training_job_name + '-tuner'
)
###Output
_____no_output_____
###Markdown
We still need to encapsulate the inputs in S3 objects to make sure content type is correctly identified by the algorithm:
###Code
s3_input_train = sagemaker.s3_input(s3_data='s3://{}/{}/input/training_regression.csv'.format(bucket_name, prefix), content_type='text/csv')
s3_input_validation = sagemaker.s3_input(s3_data='s3://{}/{}/input/validation_regression.csv'.format(bucket_name, prefix), content_type='text/csv')
###Output
_____no_output_____
###Markdown
We can now launch the hyperparameter job by training the tuner created previously. Note that by default the command gives back control to the notebook and does not wait for the tuning job to be finished.
###Code
xgb_regression_tuner.fit(inputs={'train': s3_input_train, 'validation': s3_input_validation}, logs=True)
###Output
_____no_output_____
###Markdown
The following cell illustrates how to follow what is going on in the background. It provides insights similar to what is visible in the AWS Console, in the SageMaker Hyperparameter Tuning Jobs section:
###Code
status = utils.get_tuner_status(xgb_regression_tuner._current_job_name)
if (status != 'Completed'):
    print('Hyperparameter job not completed: ' + status)
###Output
Tuning job in progress (status: InProgress)
9 training jobs are complete:
+----+-----------------------------------------------+-----------------------+-----------+----------+-------------+-------------+--------------------+
| | TrainingJobName | FinalObjectiveValue | eta | gamma | max_depth | subsample | colsample_bytree |
|----+-----------------------------------------------+-----------------------+-----------+----------+-------------+-------------+--------------------|
| 2 | xgboost-nasa-rul-tun-200515-1248-008-9dc283af | 44.0584 | 0.284479 | 0.865915 | 4 | 0.569478 | 0.512512 |
| 3 | xgboost-nasa-rul-tun-200515-1248-007-4f76e2b4 | 43.0794 | 0.26254 | 0.484751 | 3 | 0.68832 | 0.500002 |
| 5 | xgboost-nasa-rul-tun-200515-1248-005-ef630788 | 43.3705 | 0.298506 | 8.62591 | 3 | 0.707375 | 0.562393 |
| 4 | xgboost-nasa-rul-tun-200515-1248-006-638ff70f | 42.9373 | 0.265062 | 4.95954 | 3 | 0.770571 | 0.522064 |
| 6 | xgboost-nasa-rul-tun-200515-1248-004-37c12ffc | 43.5286 | 0.284886 | 5.99939 | 4 | 0.669985 | 0.550384 |
| 7 | xgboost-nasa-rul-tun-200515-1248-003-a65b9da3 | 47.3927 | 0.243149 | 8.20276 | 9 | 0.577751 | 0.840972 |
| 9 | xgboost-nasa-rul-tun-200515-1248-001-3eb392d6 | 46.314 | 0.0765576 | 9.64572 | 10 | 0.552244 | 0.876246 |
| 8 | xgboost-nasa-rul-tun-200515-1248-002-1ab1ca8d | 48.089 | 0.0581141 | 4.428 | 10 | 0.829879 | 0.950419 |
+----+-----------------------------------------------+-----------------------+-----------+----------+-------------+-------------+--------------------+
###Markdown
Deploying an endpoint to query the best model foundNow that the hyperparameter tuning job is finished, we can deploy an endpoint to query the best model found:
###Code
reg_endpoint_name = 'xgboost-nasa-regression-v2-{}'.format(datetime.now().strftime("%Y%m%d-%H%M%S"))
print('The following endpoint will be created for the best estimator previously found:', reg_endpoint_name)
# Get the training job descriptor to extract the location of the
# model artifacts on S3 and the container for the algorithm image:
# Collect the best training job name and associated estimator:
sm_client = boto3.client('sagemaker')
best_xgb_training_job = xgb_regression_tuner.best_training_job()
training_job_description = sm_client.describe_training_job(TrainingJobName=best_xgb_training_job)
model_artifacts = training_job_description['ModelArtifacts']['S3ModelArtifacts']
training_image = training_job_description['AlgorithmSpecification']['TrainingImage']
# Create a deployable model:
sm_client.create_model(
ModelName = reg_endpoint_name + '-model',
ExecutionRoleArn = role,
PrimaryContainer = {
'Image': training_image,
'ModelDataUrl': model_artifacts
}
)
# Creates the endpoint configuration: this entity describes the distribution of traffic
# across the models, whether split, shadowed, or sampled in some way:
sm_client.create_endpoint_config(
EndpointConfigName = reg_endpoint_name + '-endpoint-cfg',
ProductionVariants=[{
'InstanceType': 'ml.t2.medium',
'InitialVariantWeight': 1,
'InitialInstanceCount': 1,
'ModelName': reg_endpoint_name + '-model',
'VariantName':'AllTraffic'
}]
)
# Creates the endpoint: this entity serves up the model, through
# specifying the name and configuration defined above:
xgb_regression_predictor = sm_client.create_endpoint(
EndpointName = reg_endpoint_name,
EndpointConfigName = reg_endpoint_name + '-endpoint-cfg'
)
###Output
The following endpoint will be created for the best estimator previously found: xgboost-nasa-regression-v2-20200515-130516
###Markdown
Let's wait for the endpoint to be created:
###Code
status = sm_client.describe_endpoint(EndpointName=reg_endpoint_name)['EndpointStatus']
print('Endpoint creation in progress... ', end='')
while status == 'Creating':
    time.sleep(60)
    print('#', end='')
    status = sm_client.describe_endpoint(EndpointName=reg_endpoint_name)['EndpointStatus']
if (status == 'InService'):
    print('\nEndpoint creation successful.')
    # Build a predictor object which will receive our prediction requests:
    xgb_regression_predictor = RealTimePredictor(
        endpoint = reg_endpoint_name,
        sagemaker_session = session,
        serializer = csv_serializer,
        deserializer = csv_deserializer
    )
else:
    print('\nEndpoint creation failed.')
###Output
Endpoint creation in progress... #########
Endpoint creation successful.
###Markdown
Get prediction for the test data
###Code
# We want to estimate the remaining useful lifetime for each unit:
tuned_reg_results_df = pd.DataFrame(columns=['unit_number', 'real_rul', 'predicted_rul'])
# Loop through each unit:
for i in range(0, 100):
    # Get all the test data for the current unit:
    row = test_data_scaled[test_data_scaled['unit_number'] == (i+1)].iloc[-1, :]
    # Join them together into a CSV string before sending them to the predictor:
    test_sample = ', '.join(row[X_train.columns].map(str).tolist())
    prediction = xgb_regression_predictor.predict(test_sample)
    # Add the result for the current unit to the results dataframe:
    tuned_reg_results_df = tuned_reg_results_df.append({
        'unit_number': (i+1),
        'real_rul': row['rul'],
        'predicted_rul': float(prediction[0][0])
    }, ignore_index=True)
tuned_reg_results_df = tuned_reg_results_df.set_index('unit_number')
tuned_reg_results_df.head()
###Output
_____no_output_____
###Markdown
We can use the following dataframe to perform some error analysis:
###Code
df = pd.merge(reg_results_df, tuned_reg_results_df, how='inner', left_index=True, right_index=True)
df = df[['real_rul_x', 'predicted_rul_x', 'predicted_rul_y']]
df.columns = ['Real RUL', 'Initial Prediction', 'After HP Tuning']
df.head(10)
from sklearn.metrics import r2_score
print('R² (initial):', r2_score(reg_results_df['real_rul'], reg_results_df['predicted_rul']))
print('R² (after hyperparameter tuning):', r2_score(tuned_reg_results_df['real_rul'], tuned_reg_results_df['predicted_rul']))
figure=plt.figure(figsize=(10,10))
chart = sns.scatterplot(x=reg_results_df['real_rul'], y=reg_results_df['predicted_rul'], s=100, alpha=0.3, label='Initial training');
chart = sns.scatterplot(x=tuned_reg_results_df['real_rul'], y=tuned_reg_results_df['predicted_rul'], s=50, alpha=0.6, color='#CC0000', label='Hyperparameters tuned');
chart.set_title('NASA Turbofan Engine Faults - Prediction vs. Real RUL');
chart.legend(loc='lower right');
###Output
_____no_output_____
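###Markdown
A couple of extra error statistics can be read off the same dataframe; a quick sketch summarizing the residuals of both models (positive values mean the RUL is over-estimated):
###Code
# Residuals and mean absolute error for the initial and tuned regression models:
residuals = pd.DataFrame({
    'Initial Prediction': df['Initial Prediction'] - df['Real RUL'],
    'After HP Tuning': df['After HP Tuning'] - df['Real RUL']})
print(residuals.abs().mean())
###Output
_____no_output_____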
###Markdown
Conclusion---* The **classification algorithm** did not get any improvement from the hyperparameter tuning job, although a larger scope of exploration might help.* Concerning the **regression algorithm**: although the R² was improved by ~7 points (reaching a mere 50.5%), we are still very far from a usable model in production. In the next series of notebooks, we will consider timeseries analysis to address this topic. Cleanup
###Code
# Deleting the classification endpoint:
sm_client.delete_endpoint(EndpointName=cls_endpoint_name);
sm_client.delete_endpoint_config(EndpointConfigName=cls_endpoint_name + '-endpoint-cfg');
sm_client.delete_model(ModelName=cls_endpoint_name + '-model');
# Deleting the regression endpoint:
sm_client.delete_endpoint(EndpointName=reg_endpoint_name);
sm_client.delete_endpoint_config(EndpointConfigName=reg_endpoint_name + '-endpoint-cfg');
sm_client.delete_model(ModelName=reg_endpoint_name + '-model');
###Output
_____no_output_____ |
ebook/ebook_mnist_cw2_pytorch.ipynb | ###Markdown
This notebook shows how to use the CW2 algorithm, in a PyTorch environment, to attack a CNN/MLP model pretrained on the MNIST dataset. Before running this file, you first need to run the following to generate the corresponding model: cd tutorials python mnist_model_pytorch.py Using an Anaconda environment inside a Jupyter notebook requires a separate configuration; by default the system's default Python environment is used. Taking the advbox environment as an example, first run the following commands in the default system environment to install ipykernel: conda install ipykernel conda install -n advbox ipykernel Then register the kernel from the advbox environment, so that advbox appears in the interface after startup: python -m ipykernel install --user --name advbox --display-name advbox
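To make the point about logits concrete, the last layer of the network should look roughly like the sketch below; note the absence of a softmax on the returned tensor. The layer names here are hypothetical, the real definition lives in tutorials/mnist_model_pytorch.py.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F

class LogitNet(nn.Module):
    # Minimal illustration: forward() returns raw scores (logits), not probabilities.
    def __init__(self):
        super(LogitNet, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 128)
        self.fc2 = nn.Linear(128, 10)
    def forward(self, x):
        x = x.view(x.size(0), -1)   # flatten the 1x28x28 image
        x = F.relu(self.fc1(x))
        return self.fc2(x)          # logits only: no softmax here
###Output
_____no_output_____
###Markdown
Now the full attack setup and loop: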
###Code
# Debug switch
import logging
#logging.basicConfig(level=logging.INFO,format="%(filename)s[line:%(lineno)d] %(levelname)s %(message)s")
#logger=logging.getLogger(__name__)
import sys
import torch
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import torch.utils.data.dataloader as Data
from adversarialbox.adversary import Adversary
from adversarialbox.attacks.cw2_pytorch import CW_L2_Attack
from adversarialbox.models.pytorch import PytorchModel
from tutorials.mnist_model_pytorch import Net
TOTAL_NUM = 100
pretrained_model="tutorials/mnist-pytorch/net.pth"
loss_func = torch.nn.CrossEntropyLoss()
# Use the MNIST test dataset and randomly pick TOTAL_NUM samples
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('tutorials/mnist-pytorch/data', train=False, download=True, transform=transforms.Compose([
transforms.ToTensor(),
])),
batch_size=1, shuffle=True)
# Define what device we are using
logging.info("CUDA Available: {}".format(torch.cuda.is_available()))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Important: the output fed to CW must be the logit layer, not the softmax layer;
# otherwise the gradients will very likely vanish and the attack never converges.
# Initialize the network
model = Net().to(device)
# Load the pretrained model
model.load_state_dict(torch.load(pretrained_model, map_location='cpu'))
# Set the model in evaluation mode. In this case this is for the Dropout layers
model.eval()
# advbox demo
m = PytorchModel(
model, loss_func,(0.0, 1.0),
channel_axis=1)
# Instantiate CW_L2_Attack
attack = CW_L2_Attack(m)
# Set the number of classes (num_labels), the max number of iterations (max_iterations),
# the number of binary search steps (binary_search_steps) and the initial value of the constant C (initial_const)
attack_config = {"num_labels": 10,"max_iterations":1000,"binary_search_steps":4,"initial_const":100.0}
# use test data to generate adversarial examples
total_count = 0
fooling_count = 0
for i, data in enumerate(test_loader):
    inputs, labels = data
    inputs, labels = inputs.numpy(), labels.numpy()
    total_count += 1
    adversary = Adversary(inputs, labels[0])
    # CW2 non-targeted attack
    adversary = attack(adversary, **attack_config)
    if adversary.is_successful():
        fooling_count += 1
        print(
            'attack success, original_label=%d, adversarial_label=%d, count=%d'
            % (labels, adversary.adversarial_label, total_count))
    else:
        print('attack failed, original_label=%d, count=%d' %
              (labels, total_count))
    if total_count >= TOTAL_NUM:
        print(
            "[TEST_DATASET]: fooling_count=%d, total_count=%d, fooling_rate=%f"
            % (fooling_count, total_count,
               float(fooling_count) / total_count))
        break
print("cw2 attack done")
###Output
cuda
attack success, original_label=9, adversarial_label=4, count=1
attack success, original_label=3, adversarial_label=2, count=2
attack success, original_label=1, adversarial_label=7, count=3
attack success, original_label=5, adversarial_label=3, count=4
attack success, original_label=0, adversarial_label=2, count=5
attack success, original_label=6, adversarial_label=4, count=6
attack success, original_label=2, adversarial_label=8, count=7
attack success, original_label=1, adversarial_label=4, count=8
attack success, original_label=8, adversarial_label=9, count=9
attack success, original_label=1, adversarial_label=6, count=10
attack success, original_label=2, adversarial_label=1, count=11
attack success, original_label=4, adversarial_label=9, count=12
attack success, original_label=8, adversarial_label=3, count=13
attack success, original_label=7, adversarial_label=9, count=14
attack success, original_label=6, adversarial_label=5, count=15
attack success, original_label=1, adversarial_label=4, count=16
attack success, original_label=5, adversarial_label=9, count=17
attack success, original_label=7, adversarial_label=2, count=18
attack success, original_label=8, adversarial_label=2, count=19
attack success, original_label=1, adversarial_label=4, count=20
attack success, original_label=1, adversarial_label=4, count=21
attack success, original_label=4, adversarial_label=7, count=22
attack success, original_label=7, adversarial_label=3, count=23
attack success, original_label=9, adversarial_label=4, count=24
attack success, original_label=7, adversarial_label=9, count=25
attack success, original_label=6, adversarial_label=5, count=26
attack success, original_label=3, adversarial_label=9, count=27
attack success, original_label=1, adversarial_label=4, count=28
attack success, original_label=2, adversarial_label=7, count=29
attack success, original_label=1, adversarial_label=7, count=30
attack success, original_label=0, adversarial_label=9, count=31
attack success, original_label=7, adversarial_label=3, count=32
attack success, original_label=7, adversarial_label=3, count=33
attack success, original_label=8, adversarial_label=2, count=34
attack success, original_label=5, adversarial_label=3, count=35
attack success, original_label=0, adversarial_label=6, count=36
attack success, original_label=9, adversarial_label=4, count=37
attack success, original_label=4, adversarial_label=9, count=38
attack success, original_label=1, adversarial_label=4, count=39
attack success, original_label=5, adversarial_label=3, count=40
attack success, original_label=6, adversarial_label=5, count=41
attack success, original_label=0, adversarial_label=6, count=42
attack success, original_label=4, adversarial_label=9, count=43
attack success, original_label=6, adversarial_label=5, count=44
attack success, original_label=6, adversarial_label=0, count=45
attack success, original_label=4, adversarial_label=9, count=46
attack success, original_label=9, adversarial_label=4, count=47
attack success, original_label=3, adversarial_label=7, count=48
attack success, original_label=5, adversarial_label=3, count=49
attack success, original_label=8, adversarial_label=9, count=50
attack success, original_label=4, adversarial_label=9, count=51
attack success, original_label=2, adversarial_label=7, count=52
attack success, original_label=8, adversarial_label=2, count=53
attack success, original_label=2, adversarial_label=3, count=54
attack success, original_label=0, adversarial_label=6, count=55
attack success, original_label=4, adversarial_label=2, count=56
attack success, original_label=5, adversarial_label=8, count=57
attack success, original_label=3, adversarial_label=5, count=58
attack success, original_label=3, adversarial_label=5, count=59
attack success, original_label=3, adversarial_label=8, count=60
attack success, original_label=1, adversarial_label=4, count=61
attack success, original_label=1, adversarial_label=4, count=62
attack success, original_label=9, adversarial_label=4, count=63
attack success, original_label=4, adversarial_label=8, count=64
attack success, original_label=2, adversarial_label=3, count=65
attack success, original_label=6, adversarial_label=0, count=66
attack success, original_label=1, adversarial_label=6, count=67
attack success, original_label=5, adversarial_label=3, count=68
attack success, original_label=4, adversarial_label=6, count=69
attack success, original_label=2, adversarial_label=3, count=70
attack success, original_label=0, adversarial_label=2, count=71
attack success, original_label=1, adversarial_label=7, count=72
attack success, original_label=7, adversarial_label=9, count=73
attack success, original_label=7, adversarial_label=9, count=74
attack success, original_label=8, adversarial_label=2, count=75
attack success, original_label=1, adversarial_label=4, count=76
attack success, original_label=4, adversarial_label=8, count=77
attack success, original_label=1, adversarial_label=4, count=78
attack success, original_label=2, adversarial_label=7, count=79
attack success, original_label=6, adversarial_label=2, count=80
attack success, original_label=9, adversarial_label=4, count=81
attack success, original_label=0, adversarial_label=2, count=82
attack success, original_label=2, adversarial_label=8, count=83
attack success, original_label=5, adversarial_label=3, count=84
attack success, original_label=8, adversarial_label=5, count=85
attack success, original_label=1, adversarial_label=4, count=86
attack success, original_label=4, adversarial_label=9, count=87
attack success, original_label=0, adversarial_label=5, count=88
attack success, original_label=1, adversarial_label=6, count=89
attack success, original_label=0, adversarial_label=8, count=90
attack success, original_label=2, adversarial_label=3, count=91
attack success, original_label=4, adversarial_label=8, count=92
attack success, original_label=8, adversarial_label=3, count=93
attack success, original_label=1, adversarial_label=4, count=94
attack success, original_label=0, adversarial_label=9, count=95
attack success, original_label=1, adversarial_label=4, count=96
attack success, original_label=5, adversarial_label=3, count=97
attack success, original_label=7, adversarial_label=1, count=98
attack success, original_label=1, adversarial_label=7, count=99
attack success, original_label=9, adversarial_label=4, count=100
[TEST_DATASET]: fooling_count=100, total_count=100, fooling_rate=1.000000
cw2 attack done
###Markdown
This notebook shows how to use the CW2 algorithm in a PyTorch environment to attack a CNN/MLP model pretrained on the MNIST dataset. Before running this file, generate the model by running the corresponding script: cd tutorials python mnist_model_pytorch.py Using an Anaconda environment inside Jupyter Notebook requires a separate configuration; by default the system Python environment is used. Taking the advbox environment as an example, first run the following in the default system environment to install ipykernel: conda install ipykernel conda install -n advbox ipykernel Then register the kernel from within the advbox environment so that advbox shows up in the interface: python -m ipykernel install --user --name advbox --display-name advbox
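For convenience, the shell commands mentioned above are collected below. They are meant to be run in a terminal rather than in this notebook, and `advbox` is simply the example environment name used in the text.
###Code
# Shell commands from the description above (run them in a terminal, not in this cell):
#   cd tutorials
#   python mnist_model_pytorch.py      # generate the pretrained MNIST model
#   conda install ipykernel            # in the default system environment
#   conda install -n advbox ipykernel
#   python -m ipykernel install --user --name advbox --display-name advbox
###Output
_____no_output_____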
###Code
# Debug switch
import logging
#logging.basicConfig(level=logging.INFO,format="%(filename)s[line:%(lineno)d] %(levelname)s %(message)s")
#logger=logging.getLogger(__name__)
import sys
import torch
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import torch.utils.data.dataloader as Data
from advbox.adversary import Adversary
from advbox.attacks.cw2_pytorch import CW_L2_Attack
from advbox.models.pytorch import PytorchModel
from tutorials.mnist_model_pytorch import Net
TOTAL_NUM = 100
pretrained_model="tutorials/mnist-pytorch/net.pth"
loss_func = torch.nn.CrossEntropyLoss()
# Use the MNIST test set and randomly pick TOTAL_NUM samples
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('tutorials/mnist-pytorch/data', train=False, download=True, transform=transforms.Compose([
transforms.ToTensor(),
])),
batch_size=1, shuffle=True)
# Define what device we are using
logging.info("CUDA Available: {}".format(torch.cuda.is_available()))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Important: CW must operate on the logit layer rather than the softmax layer, otherwise the gradients are very likely to vanish and the attack never converges
# Initialize the network
model = Net().to(device)
# Load the pretrained model
model.load_state_dict(torch.load(pretrained_model, map_location='cpu'))
# Set the model in evaluation mode. In this case this is for the Dropout layers
model.eval()
# advbox demo
m = PytorchModel(
model, loss_func,(0.0, 1.0),
channel_axis=1)
# Instantiate CW_L2_Attack
attack = CW_L2_Attack(m)
# Set the number of classes (num_labels), the maximum number of iterations (max_iterations), the number of binary search steps (binary_search_steps) and the initial value of C (initial_const)
attack_config = {"num_labels": 10,"max_iterations":1000,"binary_search_steps":4,"initial_const":100.0}
# use test data to generate adversarial examples
total_count = 0
fooling_count = 0
for i, data in enumerate(test_loader):
inputs, labels = data
inputs, labels=inputs.numpy(),labels.numpy()
total_count += 1
adversary = Adversary(inputs, labels[0])
    # CW2 (C&W L2) non-targeted attack
adversary = attack(adversary, **attack_config)
if adversary.is_successful():
fooling_count += 1
print(
'attack success, original_label=%d, adversarial_label=%d, count=%d'
            % (labels[0], adversary.adversarial_label, total_count))
else:
print('attack failed, original_label=%d, count=%d' %
              (labels[0], total_count))
if total_count >= TOTAL_NUM:
print(
"[TEST_DATASET]: fooling_count=%d, total_count=%d, fooling_rate=%f"
% (fooling_count, total_count,
float(fooling_count) / total_count))
break
print("cw2 attack done")
###Output
cuda
attack success, original_label=9, adversarial_label=4, count=1
attack success, original_label=3, adversarial_label=2, count=2
attack success, original_label=1, adversarial_label=7, count=3
attack success, original_label=5, adversarial_label=3, count=4
attack success, original_label=0, adversarial_label=2, count=5
attack success, original_label=6, adversarial_label=4, count=6
attack success, original_label=2, adversarial_label=8, count=7
attack success, original_label=1, adversarial_label=4, count=8
attack success, original_label=8, adversarial_label=9, count=9
attack success, original_label=1, adversarial_label=6, count=10
attack success, original_label=2, adversarial_label=1, count=11
attack success, original_label=4, adversarial_label=9, count=12
attack success, original_label=8, adversarial_label=3, count=13
attack success, original_label=7, adversarial_label=9, count=14
attack success, original_label=6, adversarial_label=5, count=15
attack success, original_label=1, adversarial_label=4, count=16
attack success, original_label=5, adversarial_label=9, count=17
attack success, original_label=7, adversarial_label=2, count=18
attack success, original_label=8, adversarial_label=2, count=19
attack success, original_label=1, adversarial_label=4, count=20
attack success, original_label=1, adversarial_label=4, count=21
attack success, original_label=4, adversarial_label=7, count=22
attack success, original_label=7, adversarial_label=3, count=23
attack success, original_label=9, adversarial_label=4, count=24
attack success, original_label=7, adversarial_label=9, count=25
attack success, original_label=6, adversarial_label=5, count=26
attack success, original_label=3, adversarial_label=9, count=27
attack success, original_label=1, adversarial_label=4, count=28
attack success, original_label=2, adversarial_label=7, count=29
attack success, original_label=1, adversarial_label=7, count=30
attack success, original_label=0, adversarial_label=9, count=31
attack success, original_label=7, adversarial_label=3, count=32
attack success, original_label=7, adversarial_label=3, count=33
attack success, original_label=8, adversarial_label=2, count=34
attack success, original_label=5, adversarial_label=3, count=35
attack success, original_label=0, adversarial_label=6, count=36
attack success, original_label=9, adversarial_label=4, count=37
attack success, original_label=4, adversarial_label=9, count=38
attack success, original_label=1, adversarial_label=4, count=39
attack success, original_label=5, adversarial_label=3, count=40
attack success, original_label=6, adversarial_label=5, count=41
attack success, original_label=0, adversarial_label=6, count=42
attack success, original_label=4, adversarial_label=9, count=43
attack success, original_label=6, adversarial_label=5, count=44
attack success, original_label=6, adversarial_label=0, count=45
attack success, original_label=4, adversarial_label=9, count=46
attack success, original_label=9, adversarial_label=4, count=47
attack success, original_label=3, adversarial_label=7, count=48
attack success, original_label=5, adversarial_label=3, count=49
attack success, original_label=8, adversarial_label=9, count=50
attack success, original_label=4, adversarial_label=9, count=51
attack success, original_label=2, adversarial_label=7, count=52
attack success, original_label=8, adversarial_label=2, count=53
attack success, original_label=2, adversarial_label=3, count=54
attack success, original_label=0, adversarial_label=6, count=55
attack success, original_label=4, adversarial_label=2, count=56
attack success, original_label=5, adversarial_label=8, count=57
attack success, original_label=3, adversarial_label=5, count=58
attack success, original_label=3, adversarial_label=5, count=59
attack success, original_label=3, adversarial_label=8, count=60
attack success, original_label=1, adversarial_label=4, count=61
attack success, original_label=1, adversarial_label=4, count=62
attack success, original_label=9, adversarial_label=4, count=63
attack success, original_label=4, adversarial_label=8, count=64
attack success, original_label=2, adversarial_label=3, count=65
attack success, original_label=6, adversarial_label=0, count=66
attack success, original_label=1, adversarial_label=6, count=67
attack success, original_label=5, adversarial_label=3, count=68
attack success, original_label=4, adversarial_label=6, count=69
attack success, original_label=2, adversarial_label=3, count=70
attack success, original_label=0, adversarial_label=2, count=71
attack success, original_label=1, adversarial_label=7, count=72
attack success, original_label=7, adversarial_label=9, count=73
attack success, original_label=7, adversarial_label=9, count=74
attack success, original_label=8, adversarial_label=2, count=75
attack success, original_label=1, adversarial_label=4, count=76
attack success, original_label=4, adversarial_label=8, count=77
attack success, original_label=1, adversarial_label=4, count=78
attack success, original_label=2, adversarial_label=7, count=79
attack success, original_label=6, adversarial_label=2, count=80
attack success, original_label=9, adversarial_label=4, count=81
attack success, original_label=0, adversarial_label=2, count=82
attack success, original_label=2, adversarial_label=8, count=83
attack success, original_label=5, adversarial_label=3, count=84
attack success, original_label=8, adversarial_label=5, count=85
attack success, original_label=1, adversarial_label=4, count=86
attack success, original_label=4, adversarial_label=9, count=87
attack success, original_label=0, adversarial_label=5, count=88
attack success, original_label=1, adversarial_label=6, count=89
attack success, original_label=0, adversarial_label=8, count=90
attack success, original_label=2, adversarial_label=3, count=91
attack success, original_label=4, adversarial_label=8, count=92
attack success, original_label=8, adversarial_label=3, count=93
attack success, original_label=1, adversarial_label=4, count=94
attack success, original_label=0, adversarial_label=9, count=95
attack success, original_label=1, adversarial_label=4, count=96
attack success, original_label=5, adversarial_label=3, count=97
attack success, original_label=7, adversarial_label=1, count=98
attack success, original_label=1, adversarial_label=7, count=99
attack success, original_label=9, adversarial_label=4, count=100
[TEST_DATASET]: fooling_count=100, total_count=100, fooling_rate=1.000000
cw2 attack done
|
Copia_de_regression_pyspark.ipynb | ###Markdown
###Code
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q https://dlcdn.apache.org/spark/spark-3.2.0/spark-3.2.0-bin-hadoop3.2.tgz
import os
print(os.getcwd())
!tar xf /content/spark-3.2.0-bin-hadoop3.2.tgz
!pip install -q findspark
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-3.2.0-bin-hadoop3.2"
import findspark
findspark.init()
findspark.find()
from pyspark.sql import SparkSession
spark= SparkSession \
.builder \
.appName("Our First Spark example") \
.getOrCreate()
import pandas as pd
from google.colab import drive
drive.mount('/content/drive')
#df1 = pd.read_csv('/content/drive/MyDrive/heart.csv')
housing=spark.read.option("InferSchema",'true').csv("/content/drive/MyDrive/housing2.csv", header=True)
housing.printSchema()
#columns of dataframe
housing.columns
#create sparksession object
from pyspark.sql import SparkSession
spark=SparkSession.builder.appName('lin_reg').getOrCreate()
#import Linear Regression from spark's MLlib
from pyspark.ml.regression import LinearRegression
#shape of dataset
print((housing.count(),len(housing.columns)))
#printSchema
housing.printSchema()
#view statistical measures of data
housing.describe().show(5,False)
###Output
+-------+-------------------+-----------------+------------------+------------------+------------------+------------------+-----------------+------------------+------------------+---------------+
|summary|longitude |latitude |housing_median_age|total_rooms |total_bedrooms |population |households |median_income |median_house_value|ocean_proximity|
+-------+-------------------+-----------------+------------------+------------------+------------------+------------------+-----------------+------------------+------------------+---------------+
|count |20640 |20640 |20640 |20640 |20433 |20640 |20640 |20640 |20640 |20640 |
|mean |-119.56970445736148|35.6318614341087 |28.639486434108527|2635.7630813953488|537.8705525375618 |1425.4767441860465|499.5396802325581|3.8706710029070246|206855.81690891474|null |
|stddev |2.003531723502584 |2.135952397457101|12.58555761211163 |2181.6152515827944|421.38507007403115|1132.46212176534 |382.3297528316098|1.899821717945263 |115395.61587441359|null |
|min |-124.35 |32.54 |1.0 |2.0 |1.0 |3.0 |1.0 |0.4999 |14999.0 |<1H OCEAN |
|max |-114.31 |41.95 |52.0 |39320.0 |6445.0 |35682.0 |6082.0 |15.0001 |500001.0 |NEAR OCEAN |
+-------+-------------------+-----------------+------------------+------------------+------------------+------------------+-----------------+------------------+------------------+---------------+
###Markdown
url = 'copied_raw_GH_link' ; df1 = pd.read_csv(url)
###Code
#sneak into the dataset
housing.head(3)
for i in housing.columns:
print(i)
housing.filter(housing[i].isNull()).show()
housing_w_nulls=housing.filter(housing.total_bedrooms.isNotNull())  # despite the name, this keeps only the rows WITHOUT null total_bedrooms
housing_w_nulls=housing_w_nulls.drop("ocean_proximity")
#import corr function from pyspark functions
from pyspark.sql.functions import corr
# check for correlation
housing_w_nulls.select(corr('total_rooms','median_house_value')).show()
housing_w_nulls.withColumn("longitude",(housing_w_nulls["longitude"]+200)).show(10,False)
#printSchema
housing_w_nulls.printSchema()
#import vectorassembler to create dense vectors
from pyspark.ml.linalg import Vector
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import MinMaxScaler
from pyspark.ml import Pipeline
columns_to_scale = [x for x in housing_w_nulls.columns]
assemblers = [VectorAssembler(inputCols=[col], outputCol=col + "_vec") for col in columns_to_scale]
scalers = [MinMaxScaler(inputCol=col + "_vec", outputCol=col + "_scaled") for col in columns_to_scale]
pipeline = Pipeline(stages=assemblers + scalers)
scalerModel = pipeline.fit(housing_w_nulls)
scaledData = scalerModel.transform(housing_w_nulls)
scaledData.columns
scaledData=scaledData.select(['longitude_scaled','latitude_scaled','housing_median_age_scaled','total_rooms_scaled','total_bedrooms_scaled','population_scaled','households_scaled', 'median_house_value'])
#import vectorassembler to create dense vectors
#create the vector assembler
vec_assembler=VectorAssembler(inputCols=['longitude_scaled','latitude_scaled','housing_median_age_scaled','total_rooms_scaled','total_bedrooms_scaled','population_scaled','households_scaled'],outputCol='features')
#transform the values
features_df=vec_assembler.transform(scaledData)
#validate the presence of dense vectors
features_df.printSchema()
#view the details of dense vector
features_df.select('features').show(5,False)
#create data containing input features and output column
model_df=features_df.select('features','median_house_value')
model_df.show(5,False)
#size of model df
print((model_df.count(), len(model_df.columns)))
#split the data into 70/30 ratio for train test purpose
train_df,test_df=model_df.randomSplit([0.7,0.3])
print((train_df.count(), len(train_df.columns)))
print((test_df.count(), len(test_df.columns)))
train_df.describe().show()
#Build Linear Regression model
lin_Reg=LinearRegression(labelCol='median_house_value')
#fit the linear regression model on training data set
lr_model=lin_Reg.fit(train_df)
lr_model.intercept
print(lr_model.coefficients)
training_predictions=lr_model.evaluate(train_df)
training_predictions.meanSquaredError
training_predictions.r2
#make predictions on test data
test_results=lr_model.evaluate(test_df)
#view the residual errors based on predictions
test_results.residuals.show(10)
#coefficient of determination value for model
test_results.r2
test_results.rootMeanSquaredError
test_results.meanSquaredError
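# Added sketch: the fitted model can also attach a 'prediction' column to the test set
# via the standard Spark ML transform() API, which is handy for eyeballing results.
predictions = lr_model.transform(test_df)
predictions.select('features', 'median_house_value', 'prediction').show(5, False)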
###Output
_____no_output_____ |
anpcp/iconstructive.ipynb | ###Markdown
Constructive heuristics comparison The objective of the $\alpha$-neighbor $p$-center problem can be thought of as "equally" distributing the facilities among the clients to cover them efficiently.
This is the actual goal of the $p$-dispersion problem, so a constructive heuristic that uses its objective function will be tested and compared against a greedy heuristic that takes into account the objective function of this problem. Twenty random instances of size $n = 50$, $p = 5$ and another 20 of size $n = 400$, $p = 20$ will be used, and each one will be tested with both $\alpha = 2$ and $\alpha = 3$. The coordinates of the points lie between 0 and 1000 on both axes.
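For reference, the snippet below sketches one way the $\alpha$-neighbor $p$-center objective could be evaluated on a toy instance: each client is assigned the distance to its $\alpha$-th closest open facility, and the objective is the largest of those distances. This is an illustrative stand-alone sketch, not the project's own `eval_obj_func`, whose signature and conventions may differ.
###Code
import math
def anpcp_objective(points, facilities, alpha):
    """Largest distance from any client to its alpha-th closest open facility."""
    facilities = list(facilities)
    worst = 0.0
    for idx, p in enumerate(points):
        if idx in facilities:
            continue  # for simplicity, opened sites are treated purely as facilities
        dists = sorted(math.dist(p, points[f]) for f in facilities)
        worst = max(worst, dists[alpha - 1])
    return worst
# toy instance: 6 points, facilities opened at indices 0 and 3, alpha = 2
pts = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5), (2, 8)]
print(anpcp_objective(pts, facilities=[0, 3], alpha=2))
###Output
_____no_output_____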
###Code
from copy import deepcopy
from typing import List
from models.instance import Instance
def generate_instances(amount: int, n: int, p: int) -> List[Instance]:
alpha2 = [
Instance.random(n, p, 2, 1000, 1000)
for _ in range(amount)
]
alpha3 = deepcopy(alpha2)
for i in alpha3:
i.alpha = 3
return alpha2 + alpha3
instances = generate_instances(20, 50, 5) + generate_instances(20, 400, 20)
###Output
_____no_output_____
###Markdown
We will use the following code to measure the time taken by the evaluations and the objective function results, formatted in a Pandas DataFrame.
###Code
import timeit
import pandas as pd
from heuristics.constructive import pdp_based, greedy
from utils import eval_obj_func
def measure(instance, heuristic):
start = timeit.default_timer()
solution = heuristic(instance)
time = timeit.default_timer() - start
of = eval_obj_func(instance, solution)
return heuristic.__name__, solution, of, time
def get_dataframe(data):
return pd.DataFrame({
colname: [d[i] for d in data]
for colname, i in zip(
('n', 'p', 'a', 'heuristic', 'solution', 'OF', 'seconds'),
range(len(data[0])))
})
###Output
_____no_output_____
###Markdown
Comparing data Let's create the dataframe of PDP-based evaluations, or get it from a file if it already exists:
###Code
import os
OUT_FOLDER = 'nb_results\\constructive'
filepath = os.path.join(OUT_FOLDER, 'pdp_df.csv')
if os.path.exists(filepath):
pdp_df = pd.read_csv(filepath)
else:
pdp_data = [(*i.get_parameters(), *measure(i, pdp_based)) for i in instances]
pdp_df = get_dataframe(pdp_data)
pdp_df.to_csv(filepath, index=False)
pdp_df
###Output
_____no_output_____
###Markdown
Now the dataframe of the greedy results:
###Code
filepath = os.path.join(OUT_FOLDER, 'greedy_df.csv')
if os.path.exists(filepath):
greedy_df = pd.read_csv(filepath)
else:
greedy_data = [(*i.get_parameters(), *measure(i, greedy)) for i in instances]
greedy_df = get_dataframe(greedy_data)
greedy_df.to_csv(filepath, index=False)
greedy_df
###Output
_____no_output_____
###Markdown
We now have 2 dataframes, one for each heuristic. Let's filter them by $n$ and $\alpha$ too:
###Code
filtered_data = {
heuristic: {
f'n{n}': {
f'a{alpha}': df[
(df['n'] == n) &
(df['a'] == alpha)
].iloc[:, [0, 1, 2, 3, 5, 6]]
for alpha in (2, 3)
}
for n in (50, 400)
}
for heuristic, df in (('pdp', pdp_df), ('greedy', greedy_df))
}
###Output
_____no_output_____
###Markdown
Now we can access the data by using keys referring to the heuristic, its size $n$ and $\alpha$:
###Code
filtered_data['pdp']['n50']['a2']
###Output
_____no_output_____
###Markdown
To calculate some basic statistics about the data, let's create a function that will take parameters $n$ and $\alpha$ to compare the results between the 2 heuristics:
###Code
def calc_stats(n, a):
ncol = f'n{n}'
acol = f'a{a}'
stats = (filtered_data['pdp'][ncol][acol]
.compare(filtered_data['greedy'][ncol][acol], keep_equal=True)
.rename(columns={ 'self': 'pdp', 'other': 'greedy' })
.drop(columns='heuristic'))
stats['OF', 'absolute'] = stats['OF', 'pdp'] - stats['OF', 'greedy']
stats['OF', '%'] = (stats['OF', 'absolute'] / stats['OF', 'pdp']) * 100
order = ['pdp', 'greedy', 'absolute', '%']
tops = ('OF', 'seconds')
stats = stats.loc[:, (tops, order)]
winnings = [
stats[stats['OF', 'absolute'] <= 0].count()[0],
stats[stats['OF', 'absolute'] > 0].count()[0],
'', '', '', ''
]
average = [
stats[col].mean()
for col in stats.columns
]
stats.loc['winnings'] = winnings
stats.loc['average'] = average
return stats
from itertools import product
from IPython.display import display
for n, a in product((50, 400), (2, 3)):
print('------------------------')
print(f'n = {n}, alpha = {a}')
stats = calc_stats(n, a)
filepath = os.path.join(OUT_FOLDER, f'stats_n{n}_a{a}.csv')
stats.to_csv(filepath)
display(stats)
###Output
------------------------
n = 50, alpha = 2
|
Day32/Day32_yolo_prediction_HW.ipynb | ###Markdown
Homework: Today's slides and code example already carry plenty of information, so take the time to digest them; the homework asks just one very simple question to make sure the key point is understood: under the YOLOv1 design, what is the maximum number of bounding boxes that can be detected in a single image? Example: From today's lesson you should see that the YOLO network output is a 7x7x30 tensor. The goal of today's code example is to let you understand, directly from the code, what the values inside this 7x7x30 tensor should look like after an image passes through the YOLO network.
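A quick sanity check on the homework question, using the grid and box counts of the original YOLOv1 design (S = 7 cells per side, B = 2 boxes per cell, C = 20 VOC classes); these constants come from the YOLOv1 paper, not from the course code.
###Code
S, B, C = 7, 2, 20          # YOLOv1: 7x7 grid, 2 boxes per cell, 20 VOC classes
channels = B * 5 + C        # each box carries (x, y, w, h, confidence) -> 30 channels
max_boxes = S * S * B       # every cell predicts B boxes -> at most 98 boxes per image
print(channels, max_boxes)
###Output
_____no_output_____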
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread("dog.jpg") # read the example image
h, w, _ = img.shape
###Output
_____no_output_____
###Markdown
Display today's example image; you can see that it contains a dog, a bicycle and a car.
###Code
def show(img):
    plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) # plt.imshow expects RGB images by default
plt.show()
show(img)
###Output
_____no_output_____
###Markdown
Suppose we already know the bounding-box information for the car, dog and bicycle in this image.
###Code
boxes = np.array([[128, 224, 314, 537], [475, 85, 689, 170], [162, 119, 565, 441]]).astype(float)
# normalize the bbox coordinates to the 0~1 range using the original image resolution
boxes[:, [0, 2]] = boxes[:, [0, 2]] / img.shape[1]
boxes[:, [1, 3]] = boxes[:, [1, 3]] / img.shape[0]
img_show = img.copy()
for x1, y1, x2, y2 in boxes:
cv2.rectangle(img_show, (int(x1*w), int(y1*h)), (int(x2*w), int(y2*h)), (0, 255, 0), 2)
show(img_show)
###Output
_____no_output_____
###Markdown
Suppose we are using the VOC dataset, which is annotated with 20 object classes (an introduction is available [here](https://arleyzhang.github.io/articles/1dc20586/)); the class indices for car, dog and bicycle are 1, 7 and 16 respectively.
###Code
labels = np.array([1, 7, 16])
###Output
_____no_output_____
###Markdown
With the bbox and class information in hand, we can now construct the tensor that the YOLO network should output for this image.
###Code
grid_num = 7 # split the image into a 7x7 grid
target = np.zeros((grid_num, grid_num, 30)) # initialize the YOLO target tensor; see the slides for what the 30 channels mean
print("Shape of the YOLO output tensor: ", target.shape)
# main construction logic
cell_size = 1./grid_num # size of one grid cell
wh = boxes[:,2:]-boxes[:,:2] # width and height of the bboxes
cxcy = (boxes[:,2:]+boxes[:,:2])/2 # centers of the bboxes
for i in range(len(boxes)):
cxcy_sample = cxcy[i]
    ij = np.ceil((cxcy_sample/cell_size))-1 # index of the grid cell where the bbox center falls
    target[int(ij[1]),int(ij[0]),4] = 1 # confidence for the first box of this cell
    target[int(ij[1]),int(ij[0]),9] = 1 # confidence for the second box of this cell
    target[int(ij[1]),int(ij[0]),int(labels[i])+9] = 1 # class information for this cell
    xy = ij*cell_size # relative coordinates of the top-left corner of the matched cell
    # x, y, w, h for this cell
delta_xy = (cxcy_sample -xy)/cell_size
target[int(ij[1]),int(ij[0]),2:4] = wh[i]
target[int(ij[1]),int(ij[0]),:2] = delta_xy
target[int(ij[1]),int(ij[0]),7:9] = wh[i]
target[int(ij[1]),int(ij[0]),5:7] = delta_xy
print("顯示 7x7x30 中,第一個 box 的 confidence 信息\n", target[:, :, 4])
###Output
Confidence values of the first box in the 7x7x30 tensor
[[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 1. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0. 0.]
[0. 0. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0.]]
###Markdown
In the tensor above, a value of 1 means that the grid cell contains an object and 0 means it does not. We can draw the image with the 7x7 grid overlaid and check whether the centers of the car, dog and bicycle fall exactly in the grid cells whose value is 1.
###Code
for h in np.arange(0, img.shape[0], img.shape[0]/grid_num).astype(int):
cv2.line(img_show, (0, h), (img.shape[1], h), (0, 0, 0), 2)
for w in np.arange(0, img.shape[1], img.shape[1]/grid_num).astype(int):
cv2.line(img_show, (w, 0), (w, img.shape[0]), (0, 0, 0), 2)
show(img_show)
###Output
_____no_output_____ |
1_Data_Prep_vf.ipynb | ###Markdown
**Market Study - Exporting chicken internationally** Part 1 - Data cleaning and preparation. **DATA SOURCES**http://www.fao.org/faostat/en/data***Population by country in 2013 and 2017***- Annual population > Population Est. & Proj > Total Both sexes > 2013, 2017***Food balance sheet by country in 2013***- Food Balance Sheets > vegetal products, animal products > Food supply quantity, food supply, protein supply quantity > 2013***Economic situation of the countries in 2013***- Suite of Food Security Indicators > Gross domestic product per capita, Political stability > 2013 Tools **Libraries**
###Code
# System
from pathlib import Path
# Basic
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
**Data**
###Code
# Dataset on annual populations in 2013 and 2017
data_pop = Path.cwd() / "data" / "raw" / "pop2.csv"
# Dataset on the food availability of the countries
data_anim = Path.cwd() / "data" / "raw" / "anim.csv" # animal products
data_vg = Path.cwd() / "data" / "raw" / "vg.csv" # plant products
# Dataset on the local economy of the countries
data_eco = Path.cwd() / "data" / "raw" / "eco.csv"
###Output
_____no_output_____
###Markdown
Population data *Table showing the populations by country in 2013 and 2017. More recent data are available, but in order to compare our data coming from different tables in a relevant way, the study will not go beyond 2013 for the other tables.* Nomenclature & column formatting
###Code
# Display the data
pop = pd.read_csv(data_pop)
pop.head()
# Select the columns
pop = pop.loc[:,['Area Code', 'Area','Year', 'Value']]
# Rename the columns
pop = pop.rename(index=str, columns={"Area Code":'Code_c',"Area": "Country", 'Value': 'Population'})
# Convert the population column to units of persons
pop['Population'] = (pop['Population'] * 1000).round(0)
pop.head()
# Pivot table of the population data by year
pop = pop.pivot_table(
index=['Code_c',"Country"],
columns = ["Year"], values=["Population"]).reset_index()
# Final renaming
pop.columns = ['Code_c',"Country",'pop2013','pop2017']
pop.head()
###Output
_____no_output_____
###Markdown
Missing values
###Code
# How many missing values are there in the dataframe?
pop.isnull().sum()
###Output
_____no_output_____
###Markdown
Column creation
###Code
# Create the 'Population variation' column
pop['Var_pop (%)'] = ((pop['pop2017']-pop['pop2013'])/pop['pop2017']).map(lambda x : "%e"%x)
pop.head()
###Output
_____no_output_____
###Markdown
Consistency check of the values
###Code
# World population
pop['pop2013'].sum()
pop['pop2017'].sum()
###Output
_____no_output_____
###Markdown
Tip: The results are consistent with the official figures. http://www.worldometers.info/fr/ https://www.ined.fr/fichier/s_rubrique/18709/population_societes_2013_503_population_monde.fr.pdf Food data *Table showing the food availability of the countries in 2013. No data available after 2013.* Original datasets
###Code
# Display the data on animal products
meat = pd.read_csv(data_anim)
meat.head()
# Display the data on plant products
vg = pd.read_csv(data_vg)
vg.head()
# Create an 'Origin' column in both datasets
meat["Origin"] = "animal"
vg["Origin"] = "vegetal"
# Combine vg and meat into a single dataframe via a union
food = meat.append(vg)
# Delete the two dataframes vg and meat
del meat, vg
food.head()
###Output
_____no_output_____
###Markdown
Nomenclature and formatting
###Code
# Rename the columns
food = food.rename(index=str, columns={"Country Code":'Code_c'})
# Create a pivot table
food = food.pivot_table(
index=['Code_c',"Country","Year", "Origin","Item"],
columns = ["Element"], values=["Value"], aggfunc=sum)
food.head()
# Rename the columns
food.columns = ["FS (kcal/pers/d)",'FS (kg/pers/d)', 'ProtS (g/pers/d)']
###Output
_____no_output_____
###Markdown
Data selection
###Code
# Aggregate by country / origin / year
food = food.reset_index()
food = food.drop(columns='Year') # Drop the Year column (single value)
food = food.groupby(['Code_c','Country','Origin']).sum().reset_index()
food.head()
###Output
_____no_output_____
###Markdown
Missing values
###Code
# How many missing values are there in the dataframe?
food.isnull().sum()
###Output
_____no_output_____
###Markdown
Column creation
###Code
# Pivot table on proteins
temp = food.pivot_table(
index=['Code_c',"Country"],
columns = ["Origin"], values=["ProtS (g/pers/d)"], aggfunc=sum)
temp.columns = ['ProtA (g/pers/d)', 'ProtVG (g/pers/d)'] # rename the columns according to the origin (animal or vegetal)
temp = temp.reset_index()
temp.head()
# Create a column: share of animal-origin proteins over the total protein quantity
temp['%ProtA'] = (temp['ProtA (g/pers/d)'] / (temp['ProtA (g/pers/d)'] + temp['ProtVG (g/pers/d)'])).map(lambda x : "%e"%x)
temp.head()
###Output
_____no_output_____
###Markdown
Intermediate Food table
###Code
# Aggregate by country and year
food = food.groupby(['Code_c','Country']).sum().reset_index()
food.head()
# Outer join (keeps all rows from both dataframes)
food = pd.merge(food, temp[['Code_c','Country','%ProtA']], on = ['Code_c','Country'], how = 'outer')
food.head()
###Output
_____no_output_____
###Markdown
Outer join to keep all the data from both dataframes. Economic data *Data on GDP per capita and the stability index of the countries in 2013.* Nomenclature & column formatting
###Code
# Display the economic data
eco = pd.read_csv(data_eco)
eco.head()
# Rename the columns
eco.columns = ["xx","xx2",'Code_c',"Country",'xx3','xxx'
,'xxx','Item','xx4',"Year","unit","value",'xx5','xx6']
# Create a pivot table
eco = eco.pivot_table(
index=['Code_c',"Country"],
columns = ["Item"], values=["value"], aggfunc=sum)
eco.head()
# Rename the columns
eco.columns = ['PIB$/hab', 'Ix_stab']
eco = eco.reset_index()
eco.head()
###Output
_____no_output_____
###Markdown
Missing values
###Code
# Display the missing values for GDP
eco.isnull().sum()
eco.loc[eco['Ix_stab'].isnull()]
eco.loc[eco['PIB$/hab'].isnull()]
###Output
_____no_output_____
###Markdown
Tip: - No stability value for China: not really surprising. - Some GDP per capita values are missing: this can be a problem for our clustering later on. Intermediate Economy table
###Code
# Final table on the countries' economies
eco.head()
###Output
_____no_output_____
###Markdown
Final table Joins
###Code
# Join the population and food dataframes
data = pd.merge(food, pop[['Code_c','Country','pop2013','Var_pop (%)']], on=['Code_c', 'Country'], how='outer')
data.head()
# Join the dataframe with the Economy dataframe
data = pd.merge(data, eco , on=['Code_c','Country'], how='outer')
data.head()
###Output
_____no_output_____
###Markdown
Format
###Code
# Final format
data.info()
# Reformat the protein share and population variation columns as floats
data['%ProtA'] = data['%ProtA'].astype(float)
data['Var_pop (%)'] = data['Var_pop (%)'].astype(float)
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 228 entries, 0 to 227
Data columns (total 10 columns):
Code_c 228 non-null int64
Country 228 non-null object
FS (kcal/pers/d) 171 non-null float64
FS (kg/pers/d) 171 non-null float64
ProtS (g/pers/d) 171 non-null float64
%ProtA 171 non-null float64
pop2013 227 non-null float64
Var_pop (%) 227 non-null float64
PIB$/hab 185 non-null float64
Ix_stab 193 non-null float64
dtypes: float64(8), int64(1), object(1)
memory usage: 19.6+ KB
###Markdown
NaN Base table
###Code
# Missing values
data.isnull().sum()
data.loc[data['FS (kcal/pers/d)'].isnull()].sort_values(by='pop2013', ascending = True)
###Output
_____no_output_____
###Markdown
Tip: - The missing values for the food resources mostly concern small countries whose population is below 1 million inhabitants. - For these countries, all the information about food resources is missing. The question is how to handle them: - Deletion. - Imputation by the mean or the median. - Replacement by zero. Since all the food-resource information, which is crucial for our analysis, is missing for these countries, I choose to delete them.
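For reference, the three options listed above map to pandas one-liners; the sketch below applies them to throwaway copies so that `data` itself is not modified (the next cell then applies the deletion option).
###Code
# Illustration of the three options considered above, on throwaway copies:
opt_drop = data.dropna(subset=['FS (kcal/pers/d)'])   # deletion
opt_mean = data.fillna(data.mean())                   # imputation by the mean
opt_zero = data.fillna(0)                             # replacement by zero
print(len(data), len(opt_drop), len(opt_mean), len(opt_zero))
###Output
_____no_output_____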
###Code
data = data.loc[~data['FS (kcal/pers/d)'].isnull()]
# Focus on the NaN values for GDP and the stability index
data.loc[(data['PIB$/hab'].isnull()) | (data['Ix_stab'].isnull())]
###Output
_____no_output_____
###Markdown
Tip: Keeping the same logic as before, I do not keep the states whose population is below 1 million inhabitants.
###Code
data = data.loc[data['pop2013']>1000000]
data.loc[(data['PIB$/hab'].isnull()) | (data['Ix_stab'].isnull())]
###Output
_____no_output_____
###Markdown
Tip: I do not keep Cuba and North Korea, which are relatively 'closed' and therefore not very favorable to the development of a market.
###Code
data = data.drop([31,80])
data.loc[(data['PIB$/hab'].isnull()) | (data['Ix_stab'].isnull())]
###Output
_____no_output_____
###Markdown
Tip: I replace the remaining values with figures found online, or with 0 for China's stability index. Since that parameter is not used for the clustering, this is not a problem.
###Code
data.loc[102,'PIB$/hab'] = 9578
data[data.Country == 'Namibia']
data = data.fillna(data.mean())
###Output
_____no_output_____
###Markdown
Tip: - If I impute the missing values, I will bias the clustering. - If I delete these rows because of a few missing values, I will lose potential candidates with large populations. Moreover, these data contain large extreme values that would influence the clustering by creating groups of only a few individuals. **I choose to perform the clustering on the food-resource data (selective variables) and to keep the other data as illustrative variables for the groups that will be formed.** I can therefore impute the missing values with the mean without fearing that this biases the clustering.
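A minimal sketch of what this choice implies for the later clustering step: only the food-resource columns built above would be passed to the algorithm, while the remaining columns stay available as illustrative variables. The selection below is illustrative only; the actual clustering is done in a separate notebook.
###Code
# Illustrative only: the food-resource columns would be the selective (clustering) variables.
cluster_cols = ['FS (kcal/pers/d)', 'FS (kg/pers/d)', 'ProtS (g/pers/d)', '%ProtA']
X_cluster = data[cluster_cols]
illustrative_cols = [c for c in data.columns if c not in cluster_cols + ['Code_c', 'Country']]
print(X_cluster.shape, illustrative_cols)
###Output
_____no_output_____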
###Code
data.isnull().sum()
###Output
_____no_output_____
###Markdown
BackUp
###Code
# Save the cleaned data
pop.to_csv(Path.cwd() / "data" / "interim" / "pop_prep.csv", encoding='utf-8',index=False)
food.to_csv(Path.cwd() / "data" / "interim" / "food_prep.csv", encoding='utf-8',index=False)
eco.to_csv(Path.cwd() / "data" / "interim" / "eco_prep.csv", encoding='utf-8',index=False)
# Save the final data
data.to_csv(Path.cwd() / "data" / "processed" / "data_prep.csv", encoding='utf-8',index=False)
###Output
_____no_output_____ |
codici/old/Untitled1.ipynb | ###Markdown

###Code
fig = plt.figure(figsize=(16, 4))
dist = stats.norm()
ax = fig.add_subplot(121)
x = np.linspace(-5,5, 100)
ax.plot(x, dist.pdf(x))
plt.title('Normal')
ax = fig.add_subplot(122)
dist1 = stats.expon()
x = np.linspace(0,5, 100)
ax.plot(x, dist1.pdf(x))
plt.title('Exponential')
plt.show()
with pm.Model() as model:
# a priori
sigma = pm.Exponential('sigma', lam=10)
theta_0 = pm.Normal('theta_0', mu=0, sd=20)
theta_1 = pm.Normal('theta_1', mu=0, sd=20)
# likelihood
likelihood = pm.Normal('y', mu=theta_0+theta_1*x, sd=sigma, observed=y)
trace = pm.sample(3000)
plt.figure(figsize=(16,8))
pm.traceplot(trace[100:], varnames=['theta_0'], lines={'theta_0':true_intercept})
plt.tight_layout()
plt.show()
fig = plt.figure(figsize=(12,4))
ax = sns.distplot(trace['theta_0'], color=colors[0])
ax.axvline(true_intercept, color=colors[1], label='True value')
plt.title(r'$p(\theta_0)$', fontsize=16)
plt.legend()
plt.show()
plt.figure(figsize=(16,8))
pm.traceplot(trace[100:], varnames=['theta_1'], lines={'theta_1':true_slope})
plt.tight_layout()
plt.show()
fig = plt.figure(figsize=(12,4))
ax = sns.distplot(trace['theta_1'], color=colors[0])
ax.axvline(true_slope, color=colors[1], label='True value')
plt.title(r'$p(\theta_1)$', fontsize=16)
plt.legend()
plt.show()
plt.figure(figsize=(16,8))
pm.traceplot(trace[100:], varnames=['sigma'])
plt.tight_layout()
plt.show()
fig = plt.figure(figsize=(12,4))
ax = sns.distplot(trace['sigma'], color=colors[0])
plt.title(r'$p(\sigma)$', fontsize=16)
plt.show()
plt.figure(figsize=(16, 10))
plt.scatter(x, y, marker='x', color=colors[0],label='sampled data')
t0 = []
t1 = []
for i in range(100):
ndx = np.random.randint(0, len(trace))
theta_0, theta_1 = trace[ndx]['theta_0'], trace[ndx]['theta_1']
t0.append(theta_0)
t1.append(theta_1)
p = theta_0+theta_1*x
plt.plot(x, p, c=colors[3], alpha=.1)
plt.plot(x, true_regression_line, color=colors[1], label='true regression line', lw=3.)
theta_0_mean = np.array(t0).mean()
theta_1_mean = np.array(t1).mean()
plt.plot(x, theta_0_mean+theta_1_mean*x, color=colors[8], label='average regression line', lw=3.)
plt.xlabel('x', fontsize=12)
plt.ylabel('y', fontsize=12)
plt.title('Posterior predictive regression lines', fontsize=16)
plt.legend(loc=0, fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown

###Code
fig = plt.figure(figsize=(16, 4))
dist = stats.halfcauchy()
ax = fig.add_subplot(121)
x = np.linspace(0,5, 100)
ax.plot(x, dist.pdf(x), color=colors[1], label='Half Cauchy')
ax.plot(x, stats.expon.pdf(x), label='Exponential')
plt.legend()
ax = fig.add_subplot(122)
dist1 = stats.t(2)
x = np.linspace(-5,5, 100)
ax.plot(x, dist1.pdf(x), color=colors[1], label='Student')
ax.plot(x, stats.norm.pdf(x),label='Gaussian')
plt.legend()
plt.show()
with pm.Model() as model_1:
# a priori
sigma = pm.HalfCauchy('sigma', beta=1)
theta_0 = pm.Normal('theta_0', mu=0, sd=20)
theta_1 = pm.Normal('theta_1', mu=0, sd=20)
# likelihood
likelihood = pm.StudentT('y', mu=theta_0+theta_1*x, sd=sigma, nu=1.0, observed=y)
trace_1 = pm.sample(3000)
plt.figure(figsize=(16,8))
pm.traceplot(trace_1[100:], varnames=['theta_0'], lines={'theta_0':true_intercept})
plt.tight_layout()
plt.show()
fig = plt.figure(figsize=(12,4))
ax = sns.distplot(trace_1['theta_0'], color=colors[0])
ax.axvline(true_intercept, color=colors[1], label='True value')
plt.title(r'$p(\theta_0)$', fontsize=16)
plt.legend()
plt.show()
fig = plt.figure(figsize=(12,4))
ax = sns.distplot(trace_1['theta_1'], color=colors[0])
ax.axvline(true_slope, color=colors[1], label='True value')
plt.title(r'$p(\theta_1)$', fontsize=16)
plt.legend()
plt.show()
fig = plt.figure(figsize=(12,4))
ax = sns.distplot(trace_1['sigma'], color=colors[0])
plt.title(r'$p(\sigma)$', fontsize=16)
plt.show()
plt.figure(figsize=(16, 10))
plt.scatter(x, y, marker='x', color=colors[0],label='sampled data')
t0 = []
t1 = []
for i in range(100):
ndx = np.random.randint(0, len(trace_1))
theta_0, theta_1 = trace_1[ndx]['theta_0'], trace_1[ndx]['theta_1']
t0.append(theta_0)
t1.append(theta_1)
p = theta_0+theta_1*x
plt.plot(x, p, c=colors[3], alpha=.1)
plt.plot(x, true_regression_line, color=colors[1], label='true regression line', lw=3.)
theta_0_mean = np.array(t0).mean()
theta_1_mean = np.array(t1).mean()
plt.plot(x, theta_0_mean+theta_1_mean*x, color=colors[8], label='average regression line', lw=3.)
plt.xlabel('x', fontsize=12)
plt.ylabel('y', fontsize=12)
plt.title('Posterior predictive regression lines', fontsize=16)
plt.legend(loc=0, fontsize=14)
plt.show()
###Output
_____no_output_____ |
silver/D01_Discrete_Fourier_Transform.ipynb | ###Markdown
prepared by Özlem Salehi (QTurkey) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $ Discrete Fourier Transform Transformations are popular in mathematics and computer science. They help *transforming* a problem into another problem whose solution is known. In this notebook we will cover *Fourier Transform*. Discrete Fourier Transform ($DFT$) is a mapping that transforms a set of complex numbers into another set of complex numbers.Suppose that we have an $N$-dimensional complex vector $x=\myvector{x_0~x_1\dots~x_{N-1}}^T$. $DFT$ of $x$ is the complex vector $y=\myvector{y_0~y_1\dots y_{N-1}}^T$ where$$y_k=\frac{1}{\sqrt{N}} \sum_{j=0}^{N-1}e^{\frac{2\pi i j k }{N}}x_j.$$ Task 1 (on paper)Given $x=\myvector{1 \\ 2}$, apply $DFT$ and obtain $y$. click for our solution Task 2Create the following list in Python (1 0 0 0 0 1 0 0 0 0 ... 1 0 0 0 0) of length $N=100$ where every 5'th value is a 1. Then compute its $DFT$ using Python and visualize.
###Code
#Create the list
#Compute DFT
from cmath import exp
from math import pi
from math import sqrt
#Visualize
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
click for our solution Task 3Repeat Task 2 where this time every 6'th value is a 1 and the rest is 0.
###Code
#Create the list
#Compute DFT
from cmath import exp
from math import pi
from math import sqrt
#Visualize
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
prepared by Özlem Salehi (QTurkey) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $ Discrete Fourier Transform Transformations are popular in mathematics and computer science. They help *transforming* a problem into another problem whose solution is known. In this notebook we will cover *Fourier Transform*. Discrete Fourier Transform ($DFT$) is a mapping that transforms a set of complex numbers into another set of complex numbers.Suppose that we have an $N$-dimensional complex vector $x=\myvector{x_0~x_1\dots~x_{N-1}}^T$. $DFT$ of $x$ is the complex vector $y=\myvector{y_0~y_1\dots y_{N-1}}^T$ where$$y_k=\frac{1}{\sqrt{N}} \sum_{j=0}^{N-1}e^{\frac{2\pi i j k }{N}}x_j.$$ Task 1 (on paper)Given $x=\myvector{1 \\ 2}$, apply $DFT$ and obtain $y$. click for our solution
###Code
import numpy as np
from numpy import pi, sin, cos, exp, array, sqrt
def fourier_transform(x):
N = len(x)
    y = np.zeros(N, dtype=complex)  # the DFT output is complex
i = complex(0, 1)
x = np.complex64(x)
for l in range(N):
for j in range(N):
            y[l] = y[l] + 1/sqrt(N) * exp(2*pi*i*l*j/N) * x[j]  # the imaginary unit i belongs in the exponent
return y
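# Quick check of the function above (added sketch). With the convention used in this notebook
# (positive exponent and a 1/sqrt(N) factor) the result equals sqrt(N) * np.fft.ifft(x);
# for Task 1, x = [1, 2] should give y = [3/sqrt(2), -1/sqrt(2)].
x_check = array([1, 2])
y_check = fourier_transform(x_check)
print(y_check)
print(np.allclose(y_check, sqrt(len(x_check)) * np.fft.ifft(x_check)))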
###Output
_____no_output_____
###Markdown
Task 2Create the following list in Python (1 0 0 0 0 1 0 0 0 0 ... 1 0 0 0 0) of length $N=100$ where every 5'th value is a 1. Then compute its $DFT$ using Python and visualize.
###Code
#Compute DFT
from cmath import exp
from math import pi
from math import sqrt
#Visualize
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
click for our solution Task 3Repeat Task 2 where this time every 6'th value is a 1 and the rest is 0.
###Code
#Create the list
#Compute DFT
from cmath import exp
from math import pi
from math import sqrt
#Visualize
import matplotlib.pyplot as plt
###Output
_____no_output_____ |
linearRegression_forecast/Z.TEST.ipynb | ###Markdown
input_fn
###Code
from io import StringIO
import pandas as pd
def input_fn(input_data, request_content_type='text/csv'):
    """
    Convert input_data into the format required for inference.
    """
    print("input_fn-request_content_type: ", request_content_type)
    # Assign the content type to a variable (default to text/csv).
    content_type = request_content_type.lower() if request_content_type else "text/csv"
    # Accept both str and bytes payloads.
    if isinstance(input_data, str):
        str_buffer = input_data
    else:
        str_buffer = str(input_data, 'utf-8')
    df = pd.read_csv(StringIO(str_buffer), header=None)
    # Only text/csv is handled.
    if (content_type == 'text/csv' or content_type == 'text/csv; charset=utf-8'):
        n_feature = df.shape[1]
        sample = df.values.reshape(-1, n_feature)
        return sample
    else:
        raise ValueError("{} not supported by script!".format(content_type))
# Convert the input data to the inference format: here we test 10 samples.
sample = input_fn(test_X[0:10])
###Output
_____no_output_____
###Markdown
Data scaling with MinMaxScaler
###Code
from sklearn.preprocessing import MinMaxScaler
import numpy as np
def normalize(raw_df):
    df = raw_df.copy()
    scaler = MinMaxScaler()
    cols = df.columns
    data = df.values
    s_data = scaler.fit_transform(data)
    df = pd.DataFrame(s_data, columns=cols)
    # Fit a separate scaler on the first (target) column so it can be inverted later.
    y_scaler = MinMaxScaler()
    y_data = df.iloc[:,0].values
    y_data = np.array(y_data)
    print(y_data.shape)
    y_data = y_data.reshape(-1, 1)  # a single column, as expected by the scaler
    y_data = y_scaler.fit_transform(y_data)
    return df, scaler, y_scaler
# gas_cols = gas.columns
gas, scaler, y_scaler = normalize(gas)
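# Added sketch: the fitted y_scaler can map scaled values back to the original units of the
# first column via inverse_transform (assumes the cell above ran and `gas` was defined earlier).
example_scaled = np.array([[0.0], [0.5], [1.0]])
print(y_scaler.inverse_transform(example_scaled))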
from sklearn.preprocessing import MinMaxScaler
import pandas as pd
df = pd.DataFrame({
"A" : [0, 1, 2, 3, 4],
"B" : [25, 50, 75, 100, 125]})
min_max_scaler = MinMaxScaler()
print(df)
df[["A", "B"]] = min_max_scaler.fit_transform(df[["A", "B"]])
print(df)
print(type(df))
###Output
A B
0 0 25
1 1 50
2 2 75
3 3 100
4 4 125
A B
0 0.00 0.00
1 0.25 0.25
2 0.50 0.50
3 0.75 0.75
4 1.00 1.00
<class 'pandas.core.frame.DataFrame'>
###Markdown
Log Transform
###Code
import numpy as np
Y = 100
import numpy as np
Y = np.log1p(Y)
print("Y: ", Y)
back = np.expm1(Y)
print("back: ", back)
###Output
Y: 4.61512051684126
back: 100.00000000000003
###Markdown
Ridge Regression
###Code
# make a prediction with a ridge regression model on the dataset
from pandas import read_csv
from sklearn.linear_model import Ridge
# load the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
dataframe = read_csv(url, header=None)
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
# define model
model = Ridge(alpha=1.0)
# fit model
model.fit(X, y)
print(X.shape)
row = [0.00632,18.00,2.310,0,0.5380,6.5750,65.20,4.0900,1,296.0,15.30,396.90,4.98]
print(len(row))
row2 = X[0:1].tolist()
# define new data
# make a prediction
# yhat = model.predict([row])
# yhat = model.predict([row2])
yhat = model.predict(row2)
# summarize prediction
print('Predicted: %.3f' % yhat)
###Output
Predicted: 30.253
|
EventDec/event_dec/notebook/2_Modeling.ipynb | ###Markdown
Modeling ML Tasks
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Input
###Code
file_path = "../data/df_model01.pkl"
df = pd.read_pickle(file_path)
print(df.shape)
df.head()
###Output
(75, 36)
###Markdown
__Binning__
###Code
bins = [10, 40, 120, 180] # SR, LNZ, SZG, VIE, >VIE
df["binned_distance"] = np.digitize(df.distance.values, bins=bins)
###Output
_____no_output_____
###Markdown
__Conversion for Scikit-learn__ Feature selection is based on expert knowledge. Model-based selection was hardly interpretable, but it at least confirmed "binned_distance" as a relevant feature.
###Code
feature_names = ["buzzwordy_title", "main_topic_Daten", "binned_distance"]
X = df[feature_names].values
y = df.rating.map(lambda x: 1 if x>5 else 0).values # binary target: >5 (better as all the same) was worth attending
print("X:", X.shape, "y:", y.shape)
###Output
X: (75, 3) y: (75,)
###Markdown
Train-Test Split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=23,
test_size=0.5) # 50% split, small dataset size
###Output
_____no_output_____
###Markdown
Modeling Model
###Code
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier
linreg = LinearRegression() # Benchmark model
dec_tree = DecisionTreeClassifier() # Actual model
###Output
_____no_output_____
###Markdown
Benchmark _Linear Regression_
###Code
# Benchmark
linreg.fit(X_train, y_train)
print("Score (r^2): {:.3f}".format(linreg.score(X_test, y_test)))
print("Coef: {}".format(linreg.coef_))
###Output
Score (r^2): -0.220
Coef: [ 0.02485585 0.04313925 0.08000478]
###Markdown
=> Really bad performance Model _Decision Tree CV_ <= chosen for its ability to be visualized; it did not perform much worse than the other models that were tried __Parameter Tuning__
###Code
from sklearn.model_selection import GridSearchCV
parameter_grid = {"criterion": ["gini", "entropy"],
"max_depth": [None, 1, 2, 3, 4, 5, 6],
"min_samples_leaf": list(range(1, 14)),
"max_leaf_nodes": list(range(3, 25))}
grid_search = GridSearchCV(DecisionTreeClassifier(presort=True), parameter_grid, cv=5) # 5 fold cross-val
grid_search.fit(X_train, y_train)
print("Score (Accuracy): {:.3f}".format(grid_search.score(X_test, y_test)))
print("Best Estimator: {}".format(grid_search.best_estimator_))
print("Best Parameters: {}".format(grid_search.best_params_))
###Output
Score (Accuracy): 0.632
Best Estimator: DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=None,
max_features=None, max_leaf_nodes=5, min_impurity_split=1e-07,
min_samples_leaf=2, min_samples_split=2,
min_weight_fraction_leaf=0.0, presort=True, random_state=None,
splitter='best')
Best Parameters: {'criterion': 'gini', 'max_depth': None, 'max_leaf_nodes': 5, 'min_samples_leaf': 2}
###Markdown
=> Not that good an accuracy, but at least better than a random draw __Build Model__
###Code
model = DecisionTreeClassifier(presort=True, criterion="gini", max_depth=None,
min_samples_leaf=2, max_leaf_nodes=5)
model.fit(X_train, y_train)
print("Score (Accuracy): {:.3f}".format(model.score(X_test, y_test)))
###Output
Score (Accuracy): 0.632
###Markdown
__Print Decision Tree__
###Code
from sklearn.tree import export_graphviz
export_graphviz(model, class_names=True, feature_names=feature_names,
rounded=True, filled=True, label="root", impurity=False, proportion=True,
out_file="plots/dectree_Model_best.dot")
###Output
_____no_output_____
###Markdown
Model Evaluation __Evaluation Scores__
###Code
from sklearn.metrics import classification_report
y_pred = model.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
0 0.71 0.77 0.74 26
1 0.40 0.33 0.36 12
avg / total 0.62 0.63 0.62 38
###Markdown
=> Not so good at predicting class 1 ("worth attending"), better at predicting class 0. Weighted average of precision and recall: 0.62 (F1 score) __ROC Curve__
###Code
from sklearn.metrics import roc_curve, roc_auc_score
fpr, tpr, thresholds = roc_curve(y_test, y_pred)
plt.plot(fpr, tpr, label="ROC")
plt.plot([0, 1], c="r", label="Chance level")
plt.title("ROC Curve")
plt.xlabel("False Positive Rate")
plt.ylabel("Recall")
plt.legend(loc=4)
plt.savefig("plots/ROC_Curve_Model.png", dpi=180)
plt.show()
print("AUC: {:.3f}".format(roc_auc_score(y_test, y_pred)))
###Output
_____no_output_____
###Markdown
=> Indeed only slightly better than a random guess: the AUC is only about 0.05 above the 0.5 chance level
###Code
from sklearn.model_selection import cross_val_score
scores_model = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation of the tuned decision tree
print("Cross-Val Scores (Accuracy): {}".format(scores_model))
print("Cross-Val Mean (Accuracy): {}".format(scores_model.mean()))
###Output
Cross-Val Scores (Accuracy): [ 0.5625 0.625 0.46666667 0.71428571 0.64285714]
Cross-Val Mean (Accuracy): 0.6022619047619048
###Markdown
=> Generalization acceptable, but not good (as expected from a Decision Tree) Persist Model
###Code
from sklearn.externals import joblib
file_path = "../data/model_trained.pkl"
joblib.dump(model, file_path)
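
# Sanity check (assumes the dump above succeeded): reload the persisted model and re-score it
model_loaded = joblib.load(file_path)
print("Reloaded score (Accuracy): {:.3f}".format(model_loaded.score(X_test, y_test)))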
###Output
_____no_output_____ |
examples/District optimization/Urban District Optimization Workflow.ipynb | ###Markdown
Workflow for a district optimizationIn this application of the FINE framework, a small district is modeled and optimized.All classes which are available to the user are utilized and examples of the selection of different parameters within these classes are given.The workflow is structured as follows:1. Required packages are imported and the input data path is set2. An energy system model instance is created3. Commodity sources are added to the energy system model4. Commodity conversion components are added to the energy system model5. Commodity storages are added to the energy system model6. Commodity transmission components are added to the energy system model7. Commodity sinks are added to the energy system model8. The energy system model is optimized9. Selected optimization results are presented 1. Import required packages and set input data path The FINE framework is imported, which provides the required classes and functions for modeling the energy system.
###Code
import FINE as fn
from getData import getData
import pandas as pd
data = getData()
%load_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
2. Create an energy system model instance The structure of the energy system model is given by the considered locations, commodities, the number of time steps as well as the hours per time step.The commodities are specified by a unit (i.e. 'GW_electric', 'GW_H2lowerHeatingValue', 'Mio. t CO2/h') which can be given as an energy or mass unit per hour. Furthermore, the cost unit and length unit are specified.
###Code
locations = data['locations']
commodityUnitDict = {'electricity': 'kW_el', 'methane': 'kW_CH4_LHV','heat':'kW_th'}
commodities = {'electricity','methane','heat'}
numberOfTimeSteps=8760
hoursPerTimeStep=1
esM = fn.EnergySystemModel(locations=locations, commodities=commodities, numberOfTimeSteps=8760,
commodityUnitsDict=commodityUnitDict,
hoursPerTimeStep=1, costUnit='€', lengthUnit='m', verboseLogLevel=2)
###Output
_____no_output_____
###Markdown
3. Add commodity sources to the energy system model Electricity Purchase
###Code
esM.add(fn.Source(esM=esM, name='Electricity purchase', commodity='electricity', hasCapacityVariable=False,
operationRateMax=data['El Purchase, operationRateMax'], commodityCost=0.298))
###Output
_____no_output_____
###Markdown
Natural Gas Purchase
###Code
esM.add(fn.Source(esM=esM, name='NaturalGas purchase', commodity='methane', hasCapacityVariable=False,
operationRateMax=data['NG Purchase, operationRateMax'], commodityCost=0.065))
###Output
_____no_output_____
###Markdown
PV
###Code
esM.add(fn.Source(esM=esM,
name='PV',
commodity='electricity',
hasCapacityVariable=True,
hasIsBuiltBinaryVariable=True,
operationRateMax=data['PV, operationRateMax'],
capacityMax=data['PV, capacityMax'],
interestRate = 0.04,
economicLifetime = 20,
investIfBuilt=1000,
investPerCapacity=1400,
opexIfBuilt = 10,
bigM = 40))
###Output
_____no_output_____
###Markdown
4. Add conversion components to the energy system model Boiler
###Code
esM.add(fn.Conversion(esM=esM,
name='Boiler',
physicalUnit = 'kW_th',
commodityConversionFactors={'methane':-1.1, 'heat':1},
hasIsBuiltBinaryVariable=True,
hasCapacityVariable=True,
interestRate = 0.04,
economicLifetime = 20,
investIfBuilt=2800,
investPerCapacity=100,
opexIfBuilt = 24,
bigM = 200))
###Output
_____no_output_____
###Markdown
5. Add commodity storages to the energy system model Thermal Storage
###Code
esM.add(fn.Storage(esM=esM,
name='Thermal Storage',
commodity='heat',
selfDischarge=0.001,
hasIsBuiltBinaryVariable=True,
capacityMax=data['TS, capacityMax'],
interestRate = 0.04,
economicLifetime = 25,
investIfBuilt=23,
investPerCapacity=24,
bigM = 250))
###Output
_____no_output_____
###Markdown
Battery Storage
###Code
esM.add(fn.Storage(esM=esM,
name='Battery Storage',
commodity='electricity',
cyclicLifetime=10000,
chargeEfficiency=0.95,
dischargeEfficiency=0.95,
chargeRate=0.5,
dischargeRate=0.5,
hasIsBuiltBinaryVariable=True,
capacityMax=data['BS, capacityMax'],
interestRate = 0.04,
economicLifetime = 12,
investIfBuilt=2000,
investPerCapacity=700,
bigM = 110))
###Output
_____no_output_____
###Markdown
6. Add commodity transmission components to the energy system model Cable Electricity
###Code
esM.add(fn.Transmission(esM=esM,
name='E_Distribution_Grid',
commodity='electricity',
losses=0.00001,
distances = data['cables, distances'],
capacityFix=data['cables, capacityFix']))
###Output
_____no_output_____
###Markdown
Natural Gas Pipeline
###Code
esM.add(fn.Transmission(esM=esM,
name='NG_Distribution_Grid',
commodity='methane',
distances = data['NG, distances'],
capacityFix=data['NG, capacityFix']))
###Output
_____no_output_____
###Markdown
7. Add commodity sinks to the energy system model Electricity Demand
###Code
esM.add(fn.Sink(esM=esM, name='Electricity demand', commodity='electricity',
hasCapacityVariable=False, operationRateFix=data['Electricity demand, operationRateFix']))
###Output
_____no_output_____
###Markdown
Heat Demand
###Code
esM.add(fn.Sink(esM=esM, name='BuildingsHeat', commodity='heat',
hasCapacityVariable=False, operationRateFix=data['Heat demand, operationRateFix']))
###Output
_____no_output_____
###Markdown
8. Optimize energy system model All components are now added to the model and the model can be optimized. If the computational complexity of the optimization should be reduced, the time series data of the specified components can be clustered before the optimization and the parameter timeSeriesAggregation is set to True in the optimize call.
###Code
esM.cluster(numberOfTypicalPeriods=7)
esM.optimize(timeSeriesAggregation=True, optimizationSpecs='cuts=0 method=2')
###Output
Academic license - for non-commercial use only
Changed value of parameter QCPDual to 1
Prev: 0 Min: 0 Max: 1 Default: 0
Freed default Gurobi environment
###Markdown
9. Selected results output Sources and Sink
###Code
esM.getOptimizationSummary("SourceSinkModel", outputLevel=2)
fig, ax = fn.plotOperationColorMap(esM, 'PV', 'bd1')
fig, ax = fn.plotOperationColorMap(esM, 'Electricity demand', 'bd1')
fig, ax = fn.plotOperationColorMap(esM, 'Electricity purchase', 'transformer')
fig, ax = fn.plotOperationColorMap(esM, 'NaturalGas purchase', 'transformer')
###Output
_____no_output_____
###Markdown
Conversion
###Code
esM.getOptimizationSummary("ConversionModel", outputLevel=2)
fig, ax = fn.plotOperationColorMap(esM, 'Boiler', 'bd1')
###Output
_____no_output_____
###Markdown
Storage
###Code
esM.getOptimizationSummary("StorageModel", outputLevel=2)
fig, ax = fn.plotOperationColorMap(esM, 'Thermal Storage', 'bd1',
variableName='stateOfChargeOperationVariablesOptimum')
###Output
_____no_output_____
###Markdown
Transmission
###Code
esM.getOptimizationSummary("TransmissionModel", outputLevel=2)
###Output
_____no_output_____
###Markdown
Workflow for a district optimizationIn this application of the FINE framework, a small district is modeled and optimized.All classes which are available to the user are utilized and examples of the selection of different parameters within these classes are given.The workflow is structured as follows:1. Required packages are imported and the input data path is set2. An energy system model instance is created3. Commodity sources are added to the energy system model4. Commodity conversion components are added to the energy system model5. Commodity storages are added to the energy system model6. Commodity transmission components are added to the energy system model7. Commodity sinks are added to the energy system model8. The energy system model is optimized9. Selected optimization results are presented 1. Import required packages and set input data path The FINE framework is imported, which provides the required classes and functions for modeling the energy system.
###Code
import FINE as fn
from getData import getData
import pandas as pd
data = getData()
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
2. Create an energy system model instance The structure of the energy system model is given by the considered locations, commodities, the number of time steps as well as the hours per time step.The commodities are specified by a unit (i.e. 'GW_electric', 'GW_H2lowerHeatingValue', 'Mio. t CO2/h') which can be given as an energy or mass unit per hour. Furthermore, the cost unit and length unit are specified.
###Code
locations = data['locations']
commodityUnitDict = {'electricity': 'kW_el', 'methane': 'kW_CH4_LHV','heat':'kW_th'}
commodities = {'electricity','methane','heat'}
numberOfTimeSteps=8760
hoursPerTimeStep=1
esM = fn.EnergySystemModel(locations=locations, commodities=commodities, numberOfTimeSteps=8760,
commodityUnitsDict=commodityUnitDict,
hoursPerTimeStep=1, costUnit='€', lengthUnit='m', verboseLogLevel=2)
###Output
_____no_output_____
###Markdown
3. Add commodity sources to the energy system model Electricity Purchase
###Code
esM.add(fn.Source(esM=esM, name='Electricity purchase', commodity='electricity', hasCapacityVariable=False,
operationRateMax=data['El Purchase, operationRateMax'], commodityCost=0.298))
###Output
_____no_output_____
###Markdown
Natural Gas Purchase
###Code
esM.add(fn.Source(esM=esM, name='NaturalGas purchase', commodity='methane', hasCapacityVariable=False,
operationRateMax=data['NG Purchase, operationRateMax'], commodityCost=0.065))
###Output
_____no_output_____
###Markdown
PV
###Code
esM.add(fn.Source(esM=esM,
name='PV',
commodity='electricity',
hasCapacityVariable=True,
hasIsBuiltBinaryVariable=True,
operationRateMax=data['PV, operationRateMax'],
capacityMax=data['PV, capacityMax'],
interestRate = 0.04,
economicLifetime = 20,
investIfBuilt=1000,
investPerCapacity=1400,
opexIfBuilt = 10,
bigM = 40))
###Output
_____no_output_____
###Markdown
4. Add conversion components to the energy system model Boiler
###Code
esM.add(fn.Conversion(esM=esM,
name='Boiler',
physicalUnit = 'kW_th',
commodityConversionFactors={'methane':-1.1, 'heat':1},
hasIsBuiltBinaryVariable=True,
hasCapacityVariable=True,
interestRate = 0.04,
economicLifetime = 20,
investIfBuilt=2800,
investPerCapacity=100,
opexIfBuilt = 24,
bigM = 200))
###Output
_____no_output_____
###Markdown
5. Add commodity storages to the energy system model Thermal Storage
###Code
esM.add(fn.Storage(esM=esM,
name='Thermal Storage',
commodity='heat',
selfDischarge=0.001,
hasIsBuiltBinaryVariable=True,
capacityMax=data['TS, capacityMax'],
interestRate = 0.04,
economicLifetime = 25,
investIfBuilt=23,
investPerCapacity=24,
bigM = 250))
###Output
_____no_output_____
###Markdown
Battery Storage
###Code
esM.add(fn.Storage(esM=esM,
name='Battery Storage',
commodity='electricity',
cyclicLifetime=10000,
chargeEfficiency=0.95,
dischargeEfficiency=0.95,
chargeRate=0.5,
dischargeRate=0.5,
hasIsBuiltBinaryVariable=True,
capacityMax=data['BS, capacityMax'],
interestRate = 0.04,
economicLifetime = 12,
investIfBuilt=2000,
investPerCapacity=700,
bigM = 110))
###Output
_____no_output_____
###Markdown
6. Add commodity transmission components to the energy system model Cable Electricity
###Code
esM.add(fn.Transmission(esM=esM,
name='E_Distribution_Grid',
commodity='electricity',
losses=0.00001,
distances = data['cables, distances'],
capacityFix=data['cables, capacityFix']))
###Output
_____no_output_____
###Markdown
Natural Gas Pipeline
###Code
esM.add(fn.Transmission(esM=esM,
name='NG_Distribution_Grid',
commodity='methane',
distances = data['NG, distances'],
capacityFix=data['NG, capacityFix']))
###Output
_____no_output_____
###Markdown
7. Add commodity sinks to the energy system model Electricity Demand
###Code
esM.add(fn.Sink(esM=esM, name='Electricity demand', commodity='electricity',
hasCapacityVariable=False, operationRateFix=data['Electricity demand, operationRateFix']))
###Output
_____no_output_____
###Markdown
Heat Demand
###Code
esM.add(fn.Sink(esM=esM, name='BuildingsHeat', commodity='heat',
hasCapacityVariable=False, operationRateFix=data['Heat demand, operationRateFix']))
###Output
_____no_output_____
###Markdown
8. Optimize energy system model All components are now added to the model and the model can be optimized. If the computational complexity of the optimization should be reduced, the time series data of the specified components can be clustered before the optimization and the parameter timeSeriesAggregation is set to True in the optimize call.
###Code
esM.cluster(numberOfTypicalPeriods=7)
esM.optimize(timeSeriesAggregation=True, logFileName='', optimizationSpecs='cuts=0 method=2')
###Output
Academic license - for non-commercial use only
Changed value of parameter QCPDual to 1
Prev: 0 Min: 0 Max: 1 Default: 0
Freed default Gurobi environment
###Markdown
9. Selected results output Sources and Sink
###Code
esM.getOptimizationSummary("SourceSinkModel", outputLevel=2)
###Output
_____no_output_____
###Markdown
Conversion
###Code
esM.getOptimizationSummary("ConversionModel", outputLevel=2)
###Output
_____no_output_____
###Markdown
Storage
###Code
esM.getOptimizationSummary("StorageModel", outputLevel=2)
###Output
_____no_output_____
###Markdown
Transmission
###Code
esM.getOptimizationSummary("TransmissionModel", outputLevel=2)
###Output
_____no_output_____ |
book/compressible-flows/isentropic.ipynb | ###Markdown
Isentropic, variable-area flows
###Code
# Necessary modules to solve problems
import numpy as np
import pandas as pd
from scipy.optimize import root_scalar
%matplotlib inline
from matplotlib import pyplot as plt
# these lines are only for helping improve the display
import matplotlib_inline.backend_inline
matplotlib_inline.backend_inline.set_matplotlib_formats('pdf', 'png')
plt.rcParams['figure.dpi']= 150
plt.rcParams['savefig.dpi'] = 150
###Output
_____no_output_____
###Markdown
Varying-area adiabatic flows Area change is one of the important factors that can adjust flow properties in compressible flow systems. (The others are friction and heat transfer, which we will cover briefly later.) For now, let's proceed to analyze flows with the following conditions/assumptions:- steady, one-dimensional flow- adiabatic: $ \delta q = 0 $, $ d s_e = 0$- no shaft work: $ \delta w_s = 0 $- no or negligible change in potential energy: $ dz = 0 $- no losses (i.e., reversible): $d s_i = 0$ As a result of the flow being adiabatic and reversible, it is also isentropic: $ds = 0$. Our goal is now to see how changes in pressure, density, and velocity relate with changing area. Starting with the energy equation, apply our conditions:$$\delta q = \delta w_s + dh + \frac{dV^2}{2} + g dz \\dh = -V dV$$We also have our thermodynamic relationships derived from Gibbs' identities:$$T ds = dh - \frac{dp}{\rho} \\dh = \frac{dp}{\rho}$$Combined together, we have$$dV = - \frac{dp}{\rho V} \;.$$Substituting this, along with the speed of sound ($ dp = a^2 d\rho $), into the continuity equation:\begin{align*}0 &= \frac{d\rho}{\rho} + \frac{dA}{A} + \frac{dV}{V} \\\frac{dp}{\rho} &= V^2 \left( \frac{d\rho}{\rho} + \frac{dA}{A} \right) \\\frac{d\rho}{\rho} &= M^2 \left( \frac{d\rho}{\rho} + \frac{dA}{A} \right)\end{align*}we obtain a relationship between area change and density change:$$\frac{d\rho}{\rho} = \left( \frac{M^2}{1-M^2} \right) \frac{dA}{A} \;.$$ (eq_density)Substituting this back into the continuity equation, we obtain a relationship between area change and velocity change:$$\frac{dV}{V} = -\left( \frac{1}{1-M^2} \right) \frac{dA}{A} \;.$$ (eq_velocity)And, finally, recalling that $ dV = -dp/\rho V $, we substitute that into Equation {eq}`eq_velocity` to get a relationship between area change and pressure change:$$dp = \rho V^2 \left( \frac{1}{1-M^2} \right) \frac{dA}{A} \;.$$ (eq_pressure)If we focus on situations where the pressure is decreasing ($ dp < 0 $), going from a high-pressure reservoir to a low-pressure receiver, we can examine how changes in the other properties must occur. Examining Equation {eq}`eq_pressure`, we see that $ dp < 0 $ results in either a positive or negative area change for the subsonic and supersonic regimes:$$(-) = \frac{1}{1-M^2} \frac{dA}{A} \;,$$so if $ M < 1 $ then $ dA < 0 $, while if $ M > 1 $ then $ dA > 0 $. Using Equations {eq}`eq_density` and {eq}`eq_velocity` we can see how changes in density and velocity occur in the different regimes as well:\begin{gather*}M < 1: \quad \frac{d\rho}{\rho} = (+)(-) \rightarrow d\rho < 0 \\M > 1: \quad \frac{d\rho}{\rho} = (-)(+) \rightarrow d\rho < 0\end{gather*}and\begin{gather*}M < 1: \quad \frac{dV}{V} = -(+)(-) \rightarrow dV > 0 \\M > 1: \quad \frac{dV}{V} = -(-)(+) \rightarrow dV > 0\end{gather*}In summary, for decreasing pressure ($ dp < 0 $), we have:| property | $ M < 1 $ | $ M > 1 $ ||:---------|:---------:|:---------:|| $ A $ | ↓ | ↑ || $ \rho $ | ↓ | ↓ || $ V $ | ↑ | ↑ |This particular example shows how properties change in a **nozzle**, which converts pressure/enthalpy to kinetic energy. We can see that a subsonic nozzle has a converging shape (i.e., has a decreasing area) while a supersonic nozzle is diverging (with an increasing area). In contrast, a **diffuser** converts kinetic energy into enthalpy/pressure, and is associated with increasing pressure ($ dp > 0 $). A subsonic diffuser is diverging, while a supersonic diffuser is converging. For propulsion applications, when we need to accelerate a gas from low speed to supersonic speeds, we will need a **converging-diverging nozzle**.
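A quick numerical check of the signs in the summary table (a minimal sketch; the ±1% area changes below are arbitrary perturbations, not physical data):
```python
# dp < 0 case: dA/A < 0 for M < 1 (converging) and dA/A > 0 for M > 1 (diverging)
for M, dA_A in [(0.5, -0.01), (2.0, +0.01)]:
    dV_V = -dA_A / (1 - M**2)             # Equation (eq_velocity)
    drho_rho = M**2 / (1 - M**2) * dA_A   # Equation (eq_density)
    print(f"M = {M}: dV/V = {dV_V:+.4f}, drho/rho = {drho_rho:+.4f}")
```
Both cases give $dV > 0$ and $d\rho < 0$, matching the table.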
This will be the focus of deeper analysis later. Equations for perfect gasesUsing our governing equations, we can generate working equations that apply to more-general flows of perfect/ideal gases, where losses (i.e., irreversibilities) may be present.The flow assumptions are:- steady, one-dimensional flow- adiabatic- no shaft work- perfect/ideal gas- no, or negligible, potential energy changesOur goal is to find relations between properties at two points in the flow that are only a function of the gas, Mach numbers at both locations, and entropy change between the two locations: $ f \left( M_1, M_2, \gamma, \delta s \right) $.Start with the continuity equation applied to a control volume, with flow entering at location 1 and leaving at location 2:$$\begin{gather*}\rho_1 A_1 V_1 = \rho_2 A_2 V_2 \\\frac{A_2}{A_1} = \frac{\rho_1 V_1}{\rho_2 V_2} = \frac{p_1}{p_2} \frac{M_1}{M_2} \left( \frac{T_2}{T_1} \right)^{1/2} \;,\end{gather*}$$ (eq_continuity_area_ratio)where we got the final relation by using the ideal gas law ($ p = \rho R T $), Mach number definition ($ V = M a $), and speed of sound expression in an ideal gas ($ a^2 = \gamma R T $).We can simplify this expression even further by finding ways to express the pressure and temperature ratios as functions of the Mach numbers and gas properties.For the temperature ratio, we can use conservation of energy and our expression for stagnation temperature:$$\begin{gather*}h_{t1} + q = h_{t2} + w_s \\\rightarrow T_{t1} = T_{t2} \\T_t = T \left(1 + \frac{\gamma - 1}{2} M^2 \right) \\\end{gather*}$$$$\therefore \frac{T_2}{T_1} = \frac{1 + \frac{\gamma-1}{2} M_1^2}{1 + \frac{\gamma-1}{2} M_2^2}$$ (eq_temperature_ratio)For pressure, recall our relationship for the ratio of stagnation pressures between two locations:$$\begin{gather*}\frac{p_{t2}}{p_{t1}} = e^{-\Delta s / R} \\p_t = p \left(1 + \frac{\gamma - 1}{2} M^2 \right)^{\gamma / (\gamma-1)} \\\frac{p_{t2}}{p_{t1}} = \frac{p_2}{p_1} \left[ \frac{1 + \frac{\gamma - 1}{2} M_2^2}{1 + \frac{\gamma - 1}{2} M_1^2} \right]^{\gamma / (\gamma-1)} = e^{-\Delta s / R}\end{gather*}$$$$\therefore \frac{p_1}{p_2} = \left[ \frac{1 + \frac{\gamma - 1}{2} M_2^2}{1 + \frac{\gamma - 1}{2} M_1^2} \right]^{\gamma / (\gamma-1)} e^{\Delta s / R} \;.$$ (eq_pressure_ratio)Putting the expressions for $ \frac{p_1}{p_2} $ and $ \frac{T_2}{T_1} $ back into Equation {eq}`eq_continuity_area_ratio`, we can obtain a final expression for the area ratio between two locations as a function of the Mach numbers at the locations, the specific heat ratio of the gas, and any entropy increase between the locations:$$\frac{A_2}{A_1} = \frac{M_1}{M_2} \left[ \frac{1 + \frac{\gamma-1}{2} M_2^2}{1 + \frac{\gamma-1}{2} M_1^2} \right]^{\frac{\gamma+1}{2(\gamma-1)}} e^{\Delta s/R} \;.$$ (eq_area_ratio_loss)Combining Equation {eq}`eq_area_ratio_loss` with {eq}`eq_pressure_ratio` and {eq}`eq_temperature_ratio`, along with the stagnation relationships$$\begin{gather*}T_{t2} = T_{t1} \\\frac{p_{t2}}{p_{t1}} = e^{-\Delta s / R} \;,\end{gather*}$$we have relationships between properties at two locations in any steady, one-dimensional flow with varying area, as long as there is no heat transfer or shaft work. 
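These working equations are easy to wrap as small helper functions (a sketch only; `delta_s` is the entropy rise in J/(kg·K), and `delta_s = 0` recovers the isentropic case):
```python
import numpy as np

def temperature_ratio(M1, M2, gamma=1.4):
    """T2/T1 from Equation (eq_temperature_ratio) -- adiabatic flow, no shaft work."""
    return (1 + 0.5*(gamma - 1)*M1**2) / (1 + 0.5*(gamma - 1)*M2**2)

def pressure_ratio(M1, M2, gamma=1.4, delta_s=0.0, R=287.0):
    """p1/p2 from Equation (eq_pressure_ratio), including losses via delta_s."""
    ratio = (1 + 0.5*(gamma - 1)*M2**2) / (1 + 0.5*(gamma - 1)*M1**2)
    return ratio**(gamma/(gamma - 1)) * np.exp(delta_s/R)
```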
We could also combine Equations {eq}`eq_pressure_ratio` and {eq}`eq_temperature_ratio` with the ideal gas law to get a density ratio:$$\frac{\rho_2}{\rho_1} = \left[ \frac{1 + \frac{\gamma - 1}{2} M_1^2}{1 + \frac{\gamma - 1}{2} M_2^2} \right]^{1 /(\gamma-1)} e^{-\Delta s / R} \;.$$ (eq_density_ratio) Sonic reference stateIf we know the Mach numbers at two locations, it is pretty easy to find the associated ratios of area, temperature, pressure, and/or density using Equations {eq}`eq_temperature_ratio` through {eq}`eq_density_ratio`.However, if we instead know the area ratio or the properties at the two locations, plus one Mach number, and want to find the unknown Mach number, these equations are a bit tougher to solve. We *can* solve them directly using numerical methods, as you will see in a bit, but we can also take advantage of a helpful concept to ease the calculations.Similar to the stagnation reference state, or the state associated with decelerating a fluid flow to rest and zero potential, isentropically, we can introduce the **sonic reference state**, indicated using a * superscript. This is the state associated with either decelerating or accelerating a fluid to sonic velocity (i.e., Mach = 1.0) by some process; currently, that will be through isentropic area change.This reference state may not physically exist in the system, but *could* through appropriate area change. As a result, they are legitimate locationsin a flow system, and so we can use all of our developed equations to consider flow from a real location to this reference location.In particular, we can write Equation {eq}`eq_area_ratio_loss` between the reference locations associated with two real locations: $ A_1 $ and $ A_1^* $, then $ A_2 $ and $ A_2^* $. By definition, the Mach numbers are 1 at the two reference states. So, the area ratio equation becomes:$$\frac{A_2^*}{A_1^*} = e^{\Delta s / R} \;.$$ (eq_reference_area_ratio)This equation expresses how the area associated with reference state changes in a flow system in the presence of losses;for isentropic flow, the reference area $ A^* $ remains constant.Recall our relationship for the stagnation pressure ratio between two locations:$$\frac{p_{t2}}{p_{t1}} = e^{-\Delta s / R} \;.$$If we take the product of these two equations, we get$$p_{t1} A_1^* = p_{t2} A_2^* \;,$$ (eq_pressure_area_product)which gives us a quantity that is conserved in any adiabatic flow. 
Isentropic relationsFor isentropic flows, where $\Delta s = 0$, Equation {eq}`eq_area_ratio_loss` reduces to$$\frac{A_2}{A_1} = \frac{M_1}{M_2} \left( \frac{1+\frac{\gamma-1}{2} M_2^2}{1+\frac{\gamma-1}{2} M_1^2} \right)^{\frac{\gamma+1}{2(\gamma-1)}} \;.$$ (eq_area_ratio_isentropic)If we have $M_1$ and $M_2$, and know the ideal gas (and therefore $\gamma$) then finding the area ratio between two sections is straightforward using the equation.Working backwards takes slightly more effort, since the equation cannot simply be inverted.Instead, it either needs to be solved numerically using a root-finding algorithm, *or* we can take advantage of the * (sonic) reference state to provide an easy way to solve problems.Let's apply Equation {eq}`eq_area_ratio_isentropic` between any point in the flow and its sonic reference state, such that $A_2 \rightarrow A$ (so $M_2 \rightarrow M$) and $A_1 \rightarrow A^*$ (and $M_1 \rightarrow 1$):$$\frac{A}{A^*} = \frac{1}{M} \left( \frac{1+\frac{\gamma-1}{2} M^2}{\frac{\gamma+1}{2}} \right)^{\frac{\gamma+1}{2(\gamma-1)}} \;,$$ (eq_ref_area_ratio)which shows that $\frac{A}{A^*} = f(\gamma, M)$.For a given value of $\gamma$ (such as 1.4, used for air), it is easy to precalculate and tabulate the reference area ratio vs. values of Mach number.Similarly, the quantities $p/p_t$, $T/T_t$, and $pA/p_t A^*$ can be tabulated as well:$$\frac{p}{p_t} = \left(1 + \frac{\gamma-1}{2} M^2 \right)^{-\gamma/(\gamma-1)}$$ (eq_stag_pressure_ratio)$$\frac{T}{T_t} = \left(1 + \frac{\gamma-1}{2} M^2 \right)^{-1}$$ (eq_stag_temperature_ratio)For example, let's tabulate the values from $M = $ 0 to 0.3:
###Code
gamma = 1.4
machs = np.arange(0, 0.31, 0.01)
# hiding divide by zero warning
with np.errstate(divide='ignore'):
area_ratios = (
(1.0/machs)*(2*(1.0 + 0.5*(gamma-1)*machs**2)/(gamma+1))**(0.5*(gamma+1)/(gamma-1))
)
pressure_ratios = (1.0/(1 + 0.5*(gamma-1)*machs**2))**(gamma/(gamma-1))
temperature_ratios = 1.0 / (1 + 0.5*(gamma-1)*machs**2)
df = pd.DataFrame({
r'$M$': machs, r'$p/p_t$': pressure_ratios, r'$T/T_t$': temperature_ratios,
r'$A/A^*$': area_ratios
})
df.style.\
hide_index().\
format(formatter={(r'$M$'): "{:.2f}"})
###Output
_____no_output_____
###Markdown
With tabulated data like this, we can then solve problems by constructing an appropriate property ratio based on known information and looking up the corresponding Mach number.For example, given a gas ($\gamma$), the areas at two locations ($A_1$ and $A_2$), and the Mach number at one location ($M_1$), we can find the Mach number at the second location ($M_2$) by constructing the reference area ratio:$$\frac{A_2}{A_2^*} = \frac{A_2}{A_1} \frac{A_1}{A_1^*} \frac{A_1^*}{A_2^*} \;,$$where $\frac{A_2}{A_2^*} = f(M_2)$, $\frac{A_2}{A_1}$ is known, $\frac{A_1}{A_1^*} = f(M_1)$, and $\frac{A_1^*}{A_2^*} = 1.0$ for isentropic flow between the two sections.So, we can find $M_2$.```{note}For a given area ratio $\frac{A}{A^*}$, there will always be **two** possible Mach numbers: a subsonic solution and a supersonic solution.```To determine which Mach number is the correct solution, you need to apply other knowledge and problem context. For example, if the initial Mach number is subsonic and the two sections are only connected by a diverging or converging area duct, then the second Mach number must also be subsonic. Example: isentropic flow problem**Problem:** Air flows isentropically through a duct ($\gamma = 1.4$) where the area is changing from point 1 to 2, with no heat transfer or shaft work. The area ratio is $\frac{A_2}{A_1} = 2.5$, the flow starts at $M_1 = 0.5$ and 4 bar.Find the Mach number and pressure at the second point in the duct.We can solve this using the classical approach (pre-calculated isentropic tables) or a numerical approach;both follow the same general approach:1. Find $M_2$ associated with the area ratio $A_2 / A_2^*$, then2. Use that to find the stagnation pressure ratio $p_2 / p_{t2}$.$$\frac{A_2}{A_2^*} = \frac{A_2}{A_1} \frac{A_1}{A_1^*} \frac{A_1^*}{A_2^*} \;,$$where $\frac{A_2}{A_1} = 2.5$ is given, we can find $\frac{A_1}{A_1^*}$ using$$\frac{A}{A^*} = \frac{1}{M} \left( \frac{1 + \frac{\gamma - 1}{2} M^2}{\frac{\gamma+1}{2}} \right)^{\frac{\gamma+1}{2(\gamma-1)}} \;,$$(either by calculating or looking up in the $\gamma = 1.4$ table) and $\frac{A_1^*}{A_2^*} = 1$ because the flow is isentropic.
###Code
gamma = 1.4
mach_1 = 0.5
A2_A1 = 2.5
A1star_A2star = 1.0 # isentropic
A1_A1star = (1.0/mach_1) * (
(1 + 0.5*(gamma-1)*mach_1**2) / ((gamma + 1)/2)
)**((gamma+1) / (2*(gamma-1)))
print(f'A1/A1^* = {A1_A1star:.4f}')
A2_A2star = A2_A1 * A1_A1star * A1star_A2star
print(f'A2/A2star = {A2_A2star:.4f}')
###Output
A2/A2star = 3.3496
###Markdown
We can then find $M_2$, because $\frac{A_2}{A_2*} = f(M_2)$. Our options are to use the $\gamma = 1.4$ tables and interpolate, or solve the associated equation numerically. Option 1: using tablesWe can find in the tables that:* at $M=0.17$, $A/A^* = 3.46351$* at $M = 0.18$, $A/A^* = 3.27793$and interpolate to find the precise $M_2$:
###Code
machs = np.array([0.17, 0.18])
areas = np.array([3.46351, 3.27793])
mach_2 = (
machs[0] * (areas[1] - A2_A2star) + machs[1] * (A2_A2star - areas[0])
) / (areas[1] - areas[0])
print(f'M2 = {mach_2:.4f}')
###Output
M2 = 0.1761
###Markdown
This is probably sufficient, but we could get a more-accurate result by interpolating using more points and using the `numpy.interp()` function:
###Code
machs = np.array([0.15, 0.16, 0.17, 0.18, 0.19])
areas = np.array([3.91034, 3.67274, 3.46351, 3.27793, 3.11226])
mach_2 = np.interp(A2_A2star, areas[::-1], machs[::-1])
print(f'M2 = {mach_2:.4f}')
###Output
M2 = 0.1761
###Markdown
Note that we have to reverse the order of the values, since `interp` expects the x-values to be increasing. Also, we could easily generate these values ourselves for a different value of $\gamma$, but it is likely easier to just solve the equation directly in that case. Option 2: solving the equation Alternately, we can solve the $\frac{A_2}{A_2^*} = f(M_2, \gamma)$ equation directly using `scipy.optimize.root_scalar`. We need to give the function initial guesses `x0` and `x1` that are subsonic, to ensure we get a subsonic solution.
###Code
def area_function(mach, gamma, area_ratio):
'''Function for area ratio, solving for M2'''
return (
area_ratio - (
(1.0/mach) * ((1 + 0.5*(gamma-1)*mach**2) /
((gamma + 1)/2))**((gamma+1) / (2*(gamma-1)))
)
)
sol = root_scalar(area_function, args=(gamma, A2_A2star), x0=0.1, x1=0.5)
print(f'M2 = {sol.root:.4f}')
###Output
M2 = 0.1760
###Markdown
```{warning}Make sure you check the output from root-finding functions, because they can converge on the wrong solution even with an appropriate initial guess. You might need to adjust the initial guess if you get a solution that is not consistent with the physics of the problem.``` Option 3: solving the full equationAs a third option, we could actually bypass the ratio approach and just numerically solve the full Equation {eq}`eq_area_ratio_loss` directly, using the known $ A_2/A_1 $, $ M_1 $, $ \gamma $, and $ \Delta s = 0 $!
###Code
def full_area_function(mach2, mach1, area_ratio, gamma=1.4, delta_s=0.0, R=287.0):
'''Function for full area ratio equation, solving for M2'''
# gamma, delta_s, and R have default values
return (
area_ratio - (mach1 / mach2) * (
(1 + 0.5*(gamma-1)*mach2**2) / (1 + 0.5*(gamma-1)*mach1**2)
)**((gamma+1) / (2*(gamma-1))) * np.exp(-delta_s / R)
)
sol = root_scalar(full_area_function, args=(mach_1, A2_A1, gamma, 0.0), x0=0.1, x1=0.5)
print(f'M2 = {sol.root:.4f}')
###Output
M2 = 0.1760
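###Markdown
The problem also asks for the static pressure at point 2. Since the flow is isentropic, $p_{t1} = p_{t2}$, so $p_2 = p_1 \, \frac{p_2/p_{t2}}{p_1/p_{t1}}$ with each ratio given by Equation {eq}`eq_stag_pressure_ratio` (using the given $p_1 = 4$ bar and the $M_2$ just found); this works out to roughly 4.6 bar:
###Code
p1 = 4.0  # bar, given in the problem statement
mach_2 = sol.root
p1_pt1 = (1 + 0.5*(gamma - 1)*mach_1**2)**(-gamma/(gamma - 1))
p2_pt2 = (1 + 0.5*(gamma - 1)*mach_2**2)**(-gamma/(gamma - 1))
p2 = p1 * p2_pt2 / p1_pt1  # p_t1 = p_t2 for isentropic flow
print(f'p2 = {p2:.3f} bar')
###Output
_____no_output_____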
|
SMC-RL.ipynb | ###Markdown
manipulator
###Code
def cal_H(theta2):
    """Inertia (mass) matrix H(q) of the two-link planar arm; depends only on the elbow angle theta2 (uniform links assumed)."""
    c2 = np.cos(theta2)
    H = np.array([[1/3*m1*l1*l1+m2*l1*l1+1/3*m2*l2*l2+m2*l1*l2*c2, 1/3*m2*l2*l2+0.5*m2*l1*l2*c2],
                 [1/3*m2*l2*l2+1/2*m2*l1*l2*c2, 1/3*m2*l2*l2]])
    return H
def cal_C(q1_dot, q2_dot, theta2):
    """Coriolis/centrifugal matrix C(q, q_dot)."""
    c2 = np.cos(theta2)
    s2 = np.sin(theta2)
    h = -0.5*m2*l1*l2*s2
    C = np.array([[h*q2_dot, h*(q1_dot+q2_dot)],
                 [-h*q1_dot, 0]])
    return C
def cal_G(theta1, theta2):
    """Gravity torque vector G(q), with each link's weight acting at its midpoint."""
    c1 = np.cos(theta1)
    c2 = np.cos(theta2)
    c12 = np.cos(theta1+theta2)
    g = 9.8
    G = np.array([[0.5*m1*g*l1*c1+m2*g*l1*c1+0.5*m2*g*l2*c12],
                 [0.5*m2*g*l2*c12]])
    return G
###Output
_____no_output_____
###Markdown
initialization
###Code
q1_dot_list = []
q2_dot_list = []
q1_list = []
q2_list = []
torque1_list = []
torque2_list = []
s1 = []
s2 = []
time_list = []
error_list = []
s1_rl_list = []
s2_rl_list = []
critic_l = []
# param
m1 = 1
m2 = 2
l1 = 2
l2 = 2
g = 9.8
# state
q = np.array([[0.01], [0.01]], dtype=np.float32)
q_dot = np.array([[0], [0]], dtype=np.float32)
q_dot_dot = np.array([[0], [0]], dtype=np.float32)
# target
q_d = np.array([[1], [2]], dtype=np.float32)
q_d_d = np.array([[0], [0]], dtype=np.float32)
q_d_dd= np.array([[0], [0]], dtype=np.float32)
# perturbance estimation
F_est = 0
F_est_dot = 0
gama = 10
F_ = 1
c1 = 2
g = 1.8
p = 0.8
# time interval
del_t = 0.001
############ RL param ##############
Wck = np.ones((degree, 1)) * 0.1
Wak = np.ones((degree, 1)) * 0.1
Gk = np.zeros((degree, 1))
rk = 0
Pk = np.random.random((degree, 1))
Pk = np.ones((degree, 1)) * 0
mu = 0.5
beta = 0.5
ka = 0.001
#lr_c = 0.08
#lr_a = 0.4
lr_c = 0.08
lr_a = 0.3
epsilon = 0.1
yak = 0
psi_c_prev = np.zeros((degree, 1))
psi_a_prev = np.zeros((degree, 1))
psi_c = np.zeros((degree, 1))
psi_a = np.zeros((degree, 1))
###### 2nd dof #######
Wck_ = np.ones((degree, 1)) * 0.1
Wak_ = np.ones((degree, 1)) * 0.1
Gk_ = np.zeros((degree, 1))
rk_ = 0
Pk_ = np.random.random((degree, 1))
Pk_ = np.ones((degree, 1)) * 0
epsilon_ = 0.1
yak_ = 0
psi_c_prev_ = np.zeros((degree, 1))
psi_a_prev_ = np.zeros((degree, 1))
psi_c_ = np.zeros((degree, 1))
psi_a_ = np.zeros((degree, 1))
# record last step
psi_c_l = [psi_c, psi_c]
psi_a_l = [psi_a, psi_c]
rk_l = [rk, rk]
Wck_l = [Wck, Wck]
Wak_l = [Wak, Wak]
psi_c_l_ = [psi_c_, psi_c_]
psi_a_l_ = [psi_a_, psi_c_]
rk_l_ = [rk_, rk_]
Wck_l_ = [Wck_, Wck_]
Wak_l_ = [Wak_, Wak_]
for i in range(15000):
time_list.append(i * del_t)
    # sinusoidal target trajectory for the joints
q_d = np.ones((2,1))
q_d[0][0] *= np.sin(2*i*del_t)
q_d[1][0] *= np.sin(3*i*del_t)
q_d_d = np.ones((2,1))
q_d_d[0][0] *= 2 * np.cos(2*i*del_t)
q_d_d[1][0] *= 3 * np.cos(3*i*del_t)
q_d_dd = - np.ones((2,1))
q_d_dd[0][0] *= 4 * np.sin(2*i*del_t)
q_d_dd[1][0] *= 9 * np.sin(3*i*del_t)
z1_dot = q_dot - q_d_d
z1 = q - q_d
z2 = q_dot + c1 * z1 - q_d_d
delt_u = 1
########### 1DOF #################
'''RL'''
# update
# s = z1_dot + 10 * z1
# s = z2 + np.sign(z1) * abs(z1) ** g
s = (z1_dot + 6 * z1) / 2
# RBF network result
psi_c = kernel_func(centers, s.T).reshape((-1,1))
psi_a = kernel_func(centers, s.T).reshape((-1,1))
# rk and TD error update
if(abs(s[0][0])<0.5):
rk = -(1 / (1+np.exp(-(s[0][0]))) - 0.5) * 1000
else:
rk = -(1 / (1+np.exp(-(s[0][0]))) - 0.5) * 50
# rk = -s[0][0] * 5
tde = rk + beta * Wck.T.dot(psi_c) - Wck.T.dot(psi_c_l[-1])
Q_prev = Wck.T.dot(psi_c_l[-1])
#Wck = Wck - lr_c*psi_c_l[-1].dot( (Q_prev+beta**(i+1)*rk_l[-1]-0.5* Wck_l[-2].T.dot(psi_c_l[-2]).T ) )
#Gk = Pk.dot(np.linalg.inv(mu + (psi_c_l[-1].T - beta*psi_c.T).dot(Pk)))
#Pk = (Pk - Gk.dot(psi_c_l[-1].T - beta*psi_c.T).dot(Pk)) / mu
#Wck = Wck + Gk.dot(rk_l[-1] + (psi_c_l[-1].T - beta*psi_c.T).dot(Wck))
Wck = Wck + lr_c*tde*psi_c_l[-1]
#Wck = Wck - lr_c * tde * (psi_c_l[-1] - beta*psi_c)
Wak = Wak + lr_a*tde*psi_a_l[-1]
yak = psi_a_l[-1].T.dot(Wak_l[-1])
# print(yak)
# print(rk_l[-1] + (psi_c_l[-1].T - gama_*psi_c.T).dot(Wck))
# print(rk)
critic_l.append(float(Q_prev))
# record last step
psi_c_l.append(psi_c)
psi_a_l.append(psi_a)
rk_l.append(rk)
Wck_l.append(Wck)
Wak_l.append(Wak)
del(psi_c_l[0])
del(psi_a_l[0])
del(rk_l[0])
del(Wck_l[0])
del(Wak_l[0])
############# 2DOF ##############################
#s = z2 + np.sign(z1) * abs(z1) ** g
# s = z1_dot + 10 * z1
psi_c_ = kernel_func(centers, s.T).reshape((-1,1))
psi_a_ = kernel_func(centers, s.T).reshape((-1,1))
if(abs(s[1][0])<0.5):
rk_ = -(1 / (1+np.exp(-(s[1][0]))) - 0.5) * 1000
else:
rk_ = -(1 / (1+np.exp(-(s[1][0]))) - 0.5) * 50
# rk_ = -s[1][0] * 5
tde_ = rk_ + beta * Wck_.T.dot(psi_c_) - Wck_.T.dot(psi_c_l_[-1])
Q_prev_ = Wck_.T.dot(psi_c_l_[-1])
#Wck_ = Wck_ - lr_c*psi_c_l_[-1].dot( (Q_prev_+beta**(i+1)*rk_l_[-1]-0.5* Wck_l_[-2].T.dot(psi_c_l_[-2]).T ) )
#Gk_ = Pk_.dot(np.linalg.inv(mu + (psi_c_l_[-1].T - beta*psi_c_.T).dot(Pk_)))
#Pk_ = (Pk_ - Gk_.dot(psi_c_l_[-1].T - beta*psi_c_.T).dot(Pk_)) / mu
#Wck_ = Wck_ + Gk_.dot(rk_l_[-1] + (psi_c_l_[-1].T - beta*psi_c_.T).dot(Wck_))
Wck_ = Wck_ + lr_c*tde_*psi_c_l_[-1]
# Wck_ = Wck_ - lr_c * tde_ * (psi_c_l[-1] - beta*psi_c)
Wak_ = Wak_ + lr_a * tde_ *psi_a_l_[-1]
yak_ = psi_a_l_[-1].T.dot(Wak_l_[-1])
#print(yak)
#print(rk)
#print(Q_prev_)
#print(rk)
# record last step
psi_c_l_.append(psi_c_)
psi_a_l_.append(psi_a_)
rk_l_.append(rk_)
Wck_l_.append(Wck_)
Wak_l_.append(Wak_)
del(psi_c_l_[0])
del(psi_a_l_[0])
del(rk_l_[0])
del(Wck_l_[0])
del(Wak_l_[0])
############### finite time control ############################
s = z2 + np.sign(z1) * abs(z1) ** g
# control law
torque = cal_C(q_dot[0][0], q_dot[1][0], q[1][0]).dot(q_dot) + cal_G(q[0][0], q[1][0]) + cal_H(q[1][0]).dot(- c1*z1_dot + q_d_dd - g*abs(z1)**(g-1)*(z2-c1*z1) -0.5*np.sign(s)*abs(s)**p)
torque[0][0] += yak
torque[1][0] += yak_
torque_a = torque - cal_G(q[0][0], q[1][0]) - cal_C(q_dot[0][0], q_dot[1][0], q[1][0]).dot(q_dot)
#torque_a[0][0] = yak
#torque_a[1][0] = yak_
    # random disturbance (alternative, left commented out):
    # error = random.random() * F_
    # sinusoidal disturbance
error = F_ * np.sin(10 *i*del_t)
q_dot_dot = np.linalg.inv(cal_H(q[1][0])).dot(torque_a) + error
q_dot = q_dot + q_dot_dot * del_t
q = q + q_dot * del_t + 0.5 * q_dot_dot * del_t**2
'''
while(q[0][0] > np.pi):
q[0][0] -= np.pi*2
while(q[0][0] < -np.pi):
q[0][0] += np.pi*2
while(q[1][0] > np.pi):
q[1][0] -= np.pi*2
while(q[1][0] < -np.pi):
q[1][0] += np.pi*2
'''
# record for visualization
s1.append(s[0][0])
s2.append(s[1][0])
q1_dot_list.append(q_dot[0][0] - q_d_d[0][0])
q2_dot_list.append(q_dot[1][0] - q_d_d[1][0])
q1_list.append(q[0][0] - q_d[0][0])
q2_list.append(q[1][0] - q_d[1][0])
#torque1_list.append(torque[0][0])
#torque2_list.append(torque[1][0])
torque1_list.append(torque[0][0])
torque2_list.append(torque[1][0])
error_list.append(error)
# open a new figure for each plot so the five plots below do not overwrite each other within this cell
plt.figure()
plt.suptitle('Sliding mode variable')
plt.plot(time_list, s1, label="s1")
plt.plot(time_list, s2, label="s2")
plt.xlabel('time/s')
plt.legend()

plt.figure()
plt.suptitle('Sliding mode variable (zoomed)')
plt.plot(time_list[6000:8000], s1[6000:8000], label="s1")
plt.plot(time_list[6000:8000], s2[6000:8000], label="s2")
plt.xlabel('time/s')
plt.legend()

plt.figure()
plt.suptitle('Torque')
plt.plot(time_list, torque1_list, label="torque1")
plt.plot(time_list, torque2_list, label="torque2")
plt.xlabel('time/s')
plt.ylabel('N * m')
plt.legend()

plt.figure()
plt.suptitle('Angle error')
plt.plot(time_list, q1_list, label="q1")
plt.plot(time_list, q2_list, label="q2")
plt.xlabel('time/s')
plt.ylabel('rad')
plt.legend()

plt.figure()
plt.suptitle('Velocity Error')
plt.plot(time_list, q1_dot_list, label="v1")
plt.plot(time_list, q2_dot_list, label="v2")
plt.xlabel('time/s')
plt.ylabel('rad / s')
plt.legend()
Wak = np.array([[ 0.24678735],
[ 0.85716651],
[ 1.8723371 ],
[ 2.39119344],
[ 6.08267974],
[ 5.72840131],
[ 6.97791643],
[ 8.79259755],
[ 7.21137203],
[ 6.02266684],
[ 5.68467737],
[ 1.365817 ],
[ 0.51206218],
[ 0.99034099],
[ 2.04717139],
[ 1.16762794],
[ 0.77703727],
[ 2.21439595],
[ 2.58759157],
[ 0.13199178],
[ 0.50525658],
[ 0.10123941],
[ 0.12144108],
[ 2.61599235],
[ 1.80393352],
[ 0.60821972],
[ 1.35743787],
[ 2.49314414],
[ 1.28422844],
[ 0.10143086],
[ 0.10001776],
[ 0.10048394],
[ 0.28226557],
[ 3.9530646 ],
[ 2.87739659],
[ 5.72650912],
[ 4.50027707],
[ 2.5360158 ],
[ 0.17261195],
[ 0.10000534],
[ 0.10636847],
[ 0.37457789],
[ 1.0661917 ],
[ 2.76675044],
[ 4.48841178],
[ 22.27017029],
[ 7.43060271],
[ 0.7342019 ],
[ 0.10478479],
[ 0.12529997],
[ -1.20246449],
[ 0.12367804],
[ -0.06542138],
[ 0.23702512],
[ -2.87017727],
[ -0.1680608 ],
[ 2.40673815],
[ 0.15820663],
[ 0.20179351],
[ -0.04579072],
[ -0.99114556],
[ 0.06229774],
[ 0.07476644],
[ 0.04696667],
[ -8.19773824],
[-18.26050254],
[ -5.81803674],
[ -0.17220573],
[ -0.742324 ],
[ -1.56500377],
[ 0.0995916 ],
[ 0.09999604],
[ 0.08762429],
[ -1.03507469],
[ -2.23099195],
[ -1.98980936],
[ -3.69489544],
[ -1.20326675],
[ -0.27729919],
[ 0.04289125],
[ 0.09999934],
[ 0.09374684],
[ -0.69766517],
[ -1.27728597],
[ -0.37762279],
[ -1.66482716],
[ -0.40628989],
[ 0.07493637],
[ 0.05181019],
[ -5.42552949],
[ -0.78852621],
[ -5.93822089],
[ -6.8384097 ],
[ -5.29107124],
[ -5.16989576],
[ -2.74404926],
[ -3.05689778],
[ -2.62252374],
[ -2.16019109],
[ -4.79318544]])
Wak_ = np.array([[ 0.24683235],
[ 0.85639545],
[ 1.85920411],
[ 2.26775074],
[ 4.17320015],
[ 0.09365913],
[ -5.25978656],
[ -8.59182726],
[ -7.24178892],
[ -6.06178831],
[ 5.68919401],
[ 1.36520982],
[ 0.51036208],
[ 0.90660963],
[ 1.73000911],
[ 0.01838571],
[ -0.355761 ],
[ -2.02231696],
[ -2.47446281],
[ 0.06685033],
[ 0.50568808],
[ 0.10123924],
[ 0.12085235],
[ 2.50016778],
[ 1.37404207],
[ 0.2361642 ],
[ -1.03307102],
[ -2.29792223],
[ -1.12285626],
[ 0.09852032],
[ 0.10001951],
[ 0.10076227],
[ 0.296491 ],
[ 4.00727711],
[ 1.85955226],
[ -0.10526803],
[ -3.18398178],
[ -2.41134167],
[ 0.02516609],
[ 0.09999062],
[ 0.1516143 ],
[ 0.66358695],
[ 1.39058199],
[ 3.56287698],
[ 4.12227208],
[ -6.28272838],
[ -6.98588153],
[ -0.63333941],
[ 0.04647578],
[ -0.12899549],
[ 4.27551117],
[ 1.46353239],
[ 0.97607082],
[ 1.87240991],
[ 14.7106392 ],
[ -0.05981898],
[-11.21085251],
[ -1.5036484 ],
[ -1.17926903],
[ -5.66006239],
[ 2.34280974],
[ 0.22323695],
[ 0.19718592],
[ 0.20926075],
[ 6.80353398],
[ 3.50591033],
[ -4.56750367],
[ -0.19377599],
[ -0.87306176],
[ -2.42186895],
[ 0.10079321],
[ 0.10000914],
[ 0.11163943],
[ 1.13758075],
[ 1.65831024],
[ 0.61910768],
[ -3.06419456],
[ -1.19551663],
[ -0.31515482],
[ 0.0354998 ],
[ 0.10000063],
[ 0.10599075],
[ 0.85987951],
[ 1.39777375],
[ 0.33832883],
[ 0.12076093],
[ -0.18406936],
[ 0.07652351],
[ 0.05197785],
[ -5.4429498 ],
[ 0.95342296],
[ 5.89629551],
[ 6.73176374],
[ 5.0426266 ],
[ 3.89182487],
[ 0.1219508 ],
[ -2.23867545],
[ -2.49556932],
[ -2.14659552],
[ -4.79672839]])
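
# critic_l records the critic's value estimate at each step but is not visualized above;
# an illustrative look at how the learned value evolves over the run:
plt.figure()
plt.suptitle('Critic value estimate (1st DOF)')
plt.plot(time_list, critic_l)
plt.xlabel('time/s')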
###Output
_____no_output_____ |
notebooks/advanced/Item_Level_Explainability/Item_Level_Explanability.ipynb | ###Markdown
Item Level Explainability - Amazon Forecast Our goal is to train a forecasting model with Amazon Forecast and explain the resultant model in order to understand how different features are impacting the predictions using Forecast Explainability. Explainability helps you better understand how the attributes in your datasets impact your forecasts. Amazon Forecast uses a metric called Impact scores to quantify the relative impact of each attribute and determine whether they increase or decrease forecast values. To enable Forecast Explainability, your predictor must include at least one of the following: related time series, item metadata, or additional datasets like Holidays and the Weather Index. CreateExplainability accepts either a Predictor ARN or Forecast ARN. To receive aggregated Impact scores for all time series and time points in your datasets, provide a Predictor ARN. To receive Impact scores for specific time series and time points, provide a Forecast ARN. To do this, we will predict the order quantity for 20 musical instruments for US stores belonging to MyMusicCompany Inc, with monthly frequency for a 12 month forecast horizon. Time-series forecasting is important to avoid the costs related to under and over forecasting, in this case specifically for order quantities for different musical instruments. The data includes dates, instrument models and order quantities. The data contains related time-varying features including Loss Rate, which represents items that get damaged during transportation, and Customer Request, which represents the number of customers on the wait list for an item. The data contains one static feature, Model Type, which represents the category the Model Id belongs to. We will train our model with the built-in holidays data provided by Amazon Forecast. We will then examine how the features in the data impact the order quantity using Explainability. Note that the impact scores, including those shown in this notebook, may differ between jobs due to some inherent randomness in how impact scores are computed. Note: the data used in this notebook is a synthetic dataset generated for the purposes of educating you on how to use the feature. **This notebook covers generating explainability for forecasting models through Amazon Forecast.** See the blog announcement: understand drivers that influence your forecasts with explainability impact scores in Amazon Forecast. 
Table of Contents* Step 0: [Setting up](setup)* Step 1: [Importing the Data into Forecast](import) * Step 1a: [Creating a Dataset Group](createDSG) * Step 1b: [Creating a Target Dataset](targetDS) * Step 1c: [Creating an RTS Dataset](RTSDS) * Step 1d: [Creating an IM Dataset](IMDS) * Step 1e: [Update the Dataset Group](updateDSG) * Step 1f: [Creating a Target Time Series Dataset Import Job](targetImport) * Step 1g: [Creating a Related Time Series Dataset Import Job](RTSImport) * Step 1h: [Creating an Item Metadata Import Job](IMImport)* Step 2a: [Train an AutoPredictor](AutoPredictor)* Step 2b: [Export the model-level explainability](export)* Step 2c: [Visualize the model-level explainability](visualize)* Step 3: [Create a Forecast](forecast)* Step 4a: [Create explainability for specific time-series](itemLevelExplainability)* Step 4b: [Create explainability export for specific time-series](itemLevelExplainabilityExport)* Step 4c: [Create explainability for specific time-series at time-points](itemAndTimePointLevelExplainability)* Step 4d: [Create explainability export for specific time-series at time-points](itemAndTimePointLevelExplainabilityExport)* Step 5: [Cleaning up your Resources](cleanup) Step 0: Setting up First let us setup Amazon ForecastThis section sets up the permissions and relevant endpoints.
###Code
import sys
import os
import shutil
import datetime
import pandas as pd
import numpy as np
# get region from boto3
import boto3
REGION = boto3.Session().region_name
# importing forecast notebook utility from notebooks/common directory
sys.path.insert( 0, os.path.abspath("../../common") )
import util
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (15.0, 5.0)
###Output
_____no_output_____
###Markdown
First, let's define a helper function. This function will make it easier to read in the exported files created as part of an explainability export into a single pandas dataframe. We'll use this later in the notebook.
###Code
def read_explainability_export(BUCKET_NAME, s3_path):
"""Read explainability export files
Inputs:
BUCKET_NAME = S3 bucket name
s3_path = S3 path to export files
, everything after "s3://BUCKET_NAME/" in S3 URI path to your files
Return: Pandas dataframe with all files concatenated row-wise
"""
# set s3 path
s3 = boto3.resource('s3')
s3_bucket = boto3.resource('s3').Bucket(BUCKET_NAME)
s3_depth = s3_path.split("/")
s3_depth = len(s3_depth) - 1
# set local path
local_write_path = "explainability_exports"
if (os.path.exists(local_write_path) and os.path.isdir(local_write_path)):
shutil.rmtree('explainability_exports')
if not(os.path.exists(local_write_path) and os.path.isdir(local_write_path)):
os.makedirs(local_write_path)
# concat part files
part_filename = ""
part_files = list(s3_bucket.objects.filter(Prefix=s3_path))
print(f"Number .part files found: {len(part_files)}")
for file in part_files:
# There will be a collection of CSVs, modify this to go get them all
if "csv" in file.key:
part_filename = file.key.split('/')[s3_depth]
window_object = s3.Object(BUCKET_NAME, file.key)
file_size = window_object.content_length
if file_size > 0:
s3.Bucket(BUCKET_NAME).download_file(file.key, local_write_path+"/"+part_filename)
# Read from local dir and combine all the part files
temp_dfs = []
for entry in os.listdir(local_write_path):
if os.path.isfile(os.path.join(local_write_path, entry)):
df = pd.read_csv(os.path.join(local_write_path, entry), index_col=None, header=0)
temp_dfs.append(df)
# Return assembled .part files as pandas Dataframe
fcst_df = pd.concat(temp_dfs, axis=0, ignore_index=True, sort=False)
return fcst_df
###Output
_____no_output_____
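###Markdown
Once an Explainability export has been created (later in this notebook), the helper above can be called as shown below; the S3 prefix here is only a placeholder, not a real path in your bucket:
###Code
# Placeholder usage sketch -- substitute the actual export prefix once the export job finishes
# explainability_df = read_explainability_export(bucket_name, "explainability_exports/my_export_job/")
# explainability_df.head()
###Output
_____no_output_____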
###Markdown
Configure the S3 bucket name and region name for this lesson.- If you don't have an S3 bucket, create it first on S3.- The default region below is taken from your current boto3 session, but you can choose any of the regions that the service is available in.
###Code
bucket_name = input("\nEnter S3 bucket name for uploading the data:")
default_region = REGION
REGION = input(f"region [enter to accept default]: {default_region} ") or default_region
###Output
_____no_output_____
###Markdown
Connect API session
###Code
session = boto3.Session(region_name=REGION)
forecast = session.client(service_name='forecast')
###Output
_____no_output_____
###Markdown
Create the role to provide to Amazon Forecast
###Code
role_name = "ForecastNotebookRole-Explainability"
print(f"Creating Role {role_name} ...")
default_role = util.get_or_create_iam_role( role_name = role_name )
role_arn = default_role
print(f"Success! Created role arn = {role_arn.split('/')[1]}")
print(role_arn)
###Output
_____no_output_____
###Markdown
Verify the steps above were successful by calling list_predictors()
###Code
forecast.list_predictors()
###Output
_____no_output_____
###Markdown
Step 1. Importing the DataIn this step, we will create a **Dataset** and **Import** the dataset from S3 to Amazon Forecast. To train a Predictor we will need a **DatasetGroup** that groups the input **Datasets**. So, we will end this step by creating a **DatasetGroup** with the imported **Dataset**. Define a dataset group name and version number for naming purposes
###Code
project = "explainability_notebook"
idx = 1
###Output
_____no_output_____
###Markdown
Step 1a. Creating a Dataset GroupFirst let's create a dataset group and then update it later to add our datasets.
###Code
dataset_group = f"{project}_{idx}"
dataset_arns = []
create_dataset_group_response = forecast.create_dataset_group(
Domain="CUSTOM",
DatasetGroupName=dataset_group,
DatasetArns=dataset_arns)
###Output
_____no_output_____
###Markdown
Below, we specify key input data and forecast parameters. The forecast frequency for this data is monthly. The forecast horizon is 12 months, matching the monthly order-quantity data and 12-month horizon described in the introduction.
###Code
freq = "M"
forecast_horizon = 12
timestamp_format = "yyyy-MM-dd HH:mm:ss"
delimiter = ','
print(f'Creating dataset group {dataset_group}')
dataset_group_arn = create_dataset_group_response['DatasetGroupArn']
forecast.describe_dataset_group(DatasetGroupArn=dataset_group_arn)
###Output
_____no_output_____
###Markdown
Step 1b. Creating a Target Time Series (TTS) DatasetIn this example, we will define a target time series. This is a required dataset to use the service.
###Code
ts_dataset_name = f"{project}_tts_{idx}"
print(ts_dataset_name)
###Output
_____no_output_____
###Markdown
Next, we specify the schema of our dataset below. Make sure the order of the attributes (columns) matches the raw data in the files. We follow the same three attribute format as the above example.
###Code
ts_schema_val = [
{"AttributeName": "timestamp", "AttributeType": "timestamp"},
{"AttributeName": "item_id", "AttributeType": "string"},
{"AttributeName": "target_value", "AttributeType": "float"}]
ts_schema = {"Attributes": ts_schema_val}
print(f'Creating target dataset {ts_dataset_name}')
response = forecast.create_dataset(
Domain="CUSTOM",
DatasetType='TARGET_TIME_SERIES',
DatasetName=ts_dataset_name,
DataFrequency=freq,
Schema=ts_schema
)
ts_dataset_arn = response['DatasetArn']
forecast.describe_dataset(DatasetArn=ts_dataset_arn)
###Output
_____no_output_____
###Markdown
 Step 1c. Creating a Related Time Series (RTS) DatasetIn this example, we will define a related time series dataset. The columns in the RTS are attributes whose impact can be explained.
###Code
rts_dataset_name = f"{project}_rts_{idx}"
print(rts_dataset_name)
rts_schema_val = [
{"AttributeName": "timestamp", "AttributeType": "timestamp"},
{"AttributeName": "item_id", "AttributeType": "string"},
{"AttributeName": "Loss_Rate", "AttributeType": "float"},
{"AttributeName": "Customer_Request", "AttributeType": "float"}]
rts_schema = {"Attributes": rts_schema_val}
print(f'Creating RTS dataset {rts_dataset_name}')
response = forecast.create_dataset(
Domain="CUSTOM",
DatasetType='RELATED_TIME_SERIES',
DataFrequency=freq,
DatasetName=rts_dataset_name,
Schema=rts_schema
)
rts_dataset_arn = response['DatasetArn']
forecast.describe_dataset(DatasetArn=rts_dataset_arn)
###Output
_____no_output_____
###Markdown
Step 1d. Creating an Item Metadata (IM) DatasetIn this example, we will define an Item Metadata dataset. This will be a feature whose impact can be explained.
###Code
im_dataset_name = f"{project}_im_{idx}"
print(im_dataset_name)
im_schema_val = [
{"AttributeName": "item_id", "AttributeType": "string"},
{"AttributeName": "Model_Type", "AttributeType": "string"}]
im_schema = {"Attributes": im_schema_val}
print(f'Creating IM dataset {im_dataset_name}')
response = forecast.create_dataset(
Domain="CUSTOM",
DatasetType='ITEM_METADATA',
DatasetName=im_dataset_name,
Schema=im_schema
)
im_dataset_arn = response['DatasetArn']
forecast.describe_dataset(DatasetArn=im_dataset_arn)
###Output
_____no_output_____
###Markdown
Step 1e. Updating the dataset group with the datasets we createdYou can have multiple datasets under the same dataset group. Update it with the datasets we created before.
###Code
dataset_arns = []
dataset_arns.append(ts_dataset_arn)
dataset_arns.append(rts_dataset_arn)
dataset_arns.append(im_dataset_arn)
forecast.update_dataset_group(DatasetGroupArn=dataset_group_arn, DatasetArns=dataset_arns)
forecast.describe_dataset_group(DatasetGroupArn=dataset_group_arn)
###Output
_____no_output_____
###Markdown
Step 1f. Creating a Target Time Series Dataset Import Job Below, we save the Target Time Series to your bucket on S3, since Amazon Forecast expects to be able to import the data from S3.
###Code
local_file = "instrumentData/TTS.csv"
key = f"{project}/{local_file}"
boto3.Session().resource('s3').Bucket(bucket_name).Object(key).upload_file(local_file)
ts_s3_data_path = f"s3://{bucket_name}/{project}/{local_file}"
print(ts_s3_data_path)
ts_dataset_import_job_response = forecast.create_dataset_import_job(
DatasetImportJobName=dataset_group,
DatasetArn=ts_dataset_arn,
DataSource= {
"S3Config" : {
"Path": ts_s3_data_path,
"RoleArn": role_arn
}
},
TimestampFormat=timestamp_format
)
ts_dataset_import_job_arn=ts_dataset_import_job_response['DatasetImportJobArn']
status = util.wait(lambda: forecast.describe_dataset_import_job(DatasetImportJobArn=ts_dataset_import_job_arn))
assert status
###Output
_____no_output_____
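The `util.wait` helper above comes from the workshop's shared `util` module. As a rough, hypothetical sketch of what such a polling helper typically does — repeatedly calling the passed `describe_*` closure until the resource status becomes `ACTIVE` or a failure state — consider the following (the function name, sleep interval and timeout are assumptions, not the actual implementation):

```python
import time

def wait_sketch(describe_fn, poll_seconds=30, timeout_seconds=3 * 60 * 60):
    """Poll a Forecast describe_* call until the resource is ACTIVE (True) or failed (False)."""
    start = time.time()
    while time.time() - start < timeout_seconds:
        status = describe_fn()["Status"]   # e.g. CREATE_PENDING, CREATE_IN_PROGRESS, ACTIVE, CREATE_FAILED
        if status == "ACTIVE":
            return True
        if status == "CREATE_FAILED":
            return False
        time.sleep(poll_seconds)           # wait before polling again
    return False                           # timed out
```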
###Markdown
Step 1g. Creating a Related Time Series Dataset Import JobBelow, we save the Related Time Series to your bucket on S3, since Amazon Forecast expects to be able to import the data from S3.
###Code
local_file = "instrumentData/RTS.csv"
key = f"{project}/{local_file}"
boto3.Session().resource('s3').Bucket(bucket_name).Object(key).upload_file(local_file)
rts_s3_data_path = f"s3://{bucket_name}/{project}/{local_file}"
print(rts_s3_data_path)
rts_dataset_import_job_response = forecast.create_dataset_import_job(
DatasetImportJobName=dataset_group,
DatasetArn=rts_dataset_arn,
DataSource= {
"S3Config" : {
"Path": rts_s3_data_path,
"RoleArn": role_arn
}
})
rts_dataset_import_job_arn=rts_dataset_import_job_response['DatasetImportJobArn']
status = util.wait(lambda: forecast.describe_dataset_import_job(DatasetImportJobArn=rts_dataset_import_job_arn))
assert status
###Output
_____no_output_____
###Markdown
Step 1h. Creating an Item Metadata Dataset Import JobBelow, we save the Item Metadata to your bucket on S3, since Amazon Forecast expects to be able to import the data from S3.
###Code
local_file = "instrumentData/IM.csv"
key = f"{project}/{local_file}"
boto3.Session().resource('s3').Bucket(bucket_name).Object(key).upload_file(local_file)
im_s3_data_path = f"s3://{bucket_name}/{project}/{local_file}"
print(im_s3_data_path)
im_dataset_import_job_response = forecast.create_dataset_import_job(
DatasetImportJobName=dataset_group,
DatasetArn=im_dataset_arn,
DataSource= {
"S3Config" : {
"Path": im_s3_data_path,
"RoleArn": role_arn
}
})
im_dataset_import_job_arn=im_dataset_import_job_response['DatasetImportJobArn']
status = util.wait(lambda: forecast.describe_dataset_import_job(DatasetImportJobArn=im_dataset_import_job_arn))
assert status
###Output
_____no_output_____
###Markdown
 Step 2a. Train an AutoPredictor with RTS, IM and Holidays Next, we will train an AutoPredictor using the dataset group created in Step 1, as well as US Holidays. Explainability requires at least one dataset attribute other than the item_id and target_value attributes, so for the predictor we create, impact scores will be generated for the RTS columns, IM and Holidays. You can create Explainability for all forecasts generated from an AutoPredictor. In addition, at AutoPredictor creation you have the option to generate model-level explainability. We will enable this option for predictor creation by setting:```pythonExplainPredictor=True```
###Code
auto_predictor_name = f'holidays_instrument_orders_auto_predictor_{idx}'
print(f'[{auto_predictor_name}] Creating predictor {auto_predictor_name} ...')
create_predictor_response = forecast.create_auto_predictor(
PredictorName=auto_predictor_name,
ForecastHorizon=forecast_horizon,
ForecastFrequency="M",
DataConfig=
{"DatasetGroupArn":dataset_group_arn,
"AdditionalDatasets":
[
{"Name":"holiday",
"Configuration":
{"CountryCode":
["US"]
}
}
]
},
ExplainPredictor=True
)
predictor_arn = create_predictor_response['PredictorArn']
status = util.wait(lambda: forecast.describe_auto_predictor(PredictorArn=predictor_arn))
assert status
forecast.describe_auto_predictor(PredictorArn=predictor_arn)
###Output
_____no_output_____
###Markdown
 When we created the AutoPredictor, we also created a model-level explainability job. We will wait for the explainability job to be Active, and then we can export it and view the results. Get the explainability ARN by calling describe on the predictor.
###Code
auto_predictor_response = forecast.describe_auto_predictor(PredictorArn=predictor_arn)
explainability_model_level_arn = auto_predictor_response["ExplainabilityInfo"]["ExplainabilityArn"]
status = util.wait(lambda: forecast.describe_explainability(ExplainabilityArn=explainability_model_level_arn))
assert status
###Output
_____no_output_____
###Markdown
 Now that the explainability is Active, we will export the results by creating an explainability export. Step 2b. Export the model-level explainability
###Code
explainability_export_name = f"{project}_explainability_export_model_level_{idx}"
explainability_export_destination = f"s3://{bucket_name}/{project}/{explainability_export_name}"
explainability_export_response = forecast.create_explainability_export(ExplainabilityExportName=explainability_export_name,
ExplainabilityArn=explainability_model_level_arn,
Destination=
{"S3Config":
{"Path": explainability_export_destination,
"RoleArn": role_arn}
}
)
explainability_export_model_level_arn = explainability_export_response['ExplainabilityExportArn']
status = util.wait(lambda: forecast.describe_explainability_export(ExplainabilityExportArn=explainability_export_model_level_arn))
assert status
forecast.describe_explainability_export(ExplainabilityExportArn=explainability_export_model_level_arn)
###Output
_____no_output_____
###Markdown
Now, let's load and view the data
###Code
export_data = read_explainability_export(bucket_name, project+"/"+explainability_export_name)
export_data.style.hide_index()
###Output
_____no_output_____
###Markdown
 Impact scores come in two forms: Normalized impact scores and Raw impact scores. Raw impact scores are based on Shapley values and are not scaled or bounded. Normalized impact scores scale the raw scores to a value between -1 and 1 to make comparing scores within the Explainability job easier. Note that the impact scores, including those shown in this notebook, may differ between jobs due to some inherent randomness in how impact scores are computed. Here we can see the aggregated scores across time-series for features in the model. From the scores, Customer_Request has the highest impact driving up the forecasted values, as its normalized impact score is closest to 1.Loss_Rate has a lower impact than Customer_Request does.Of the features explained, Holiday_US has the lowest impact on the forecasted values, as its normalized impact score is closest to 0 (no impact). It is important to note that Impact scores measure the relative impact of attributes, not the absolute impact. Therefore, Impact scores cannot be used to conclude whether particular attributes improve model accuracy. If an attribute has a low Impact score, that does not necessarily mean that it has a low impact on forecast values; it means that it has a lower impact on forecast values than other attributes used by the predictor. Step 2c. Visualize the model-level explainability  We can also view these results on the Amazon Forecast console.For more details about using the Forecast console to create and view explainabilities, see: https://aws.amazon.com/blogs/machine-learning/understand-drivers-that-influence-your-forecasts-with-explainability-impact-scores-in-amazon-forecast/ Step 3. Create Forecast
###Code
forecast_name = f"{project}_forecast_{idx}"
create_forecast_response = forecast.create_forecast(
ForecastName=forecast_name,
PredictorArn = predictor_arn
)
forecast_arn = create_forecast_response['ForecastArn']
status = util.wait(lambda: forecast.describe_forecast(ForecastArn=forecast_arn))
assert status
forecast.describe_forecast(ForecastArn=forecast_arn)
###Output
_____no_output_____
###Markdown
Step 4a. Create Explainability for specific time-series We examined the model-level explainability generated during AutoPredictor creation. Next we will generate explainability for a set of time-series of our choosing. To specify a list of time series, upload a CSV file identifying the time series by their item_id and dimension values. You can specify up to 50 time series. You must also define the attributes and attribute types of the time series in a schema.In this dataset, each time series is only defined by their item_id. We will load and view the item subset file stored locally.
###Code
item_subset_file = "InstrumentData/item_subset.csv"
item_subset_df = pd.read_csv(item_subset_file, names=['item_id'])
item_subset_df.style.hide_index()
###Output
_____no_output_____
###Markdown
 Now save the local item subset file to S3, as Forecast expects to read the file from S3.
###Code
key = f"{project}/InstrumentData/item_subset.csv"
boto3.Session().resource('s3').Bucket(bucket_name).Object(key).upload_file(item_subset_file)
item_subset_path = f"s3://{bucket_name}/{key}"
explainability_name = f"{project}_item_level_explainability_{idx}"
###Output
_____no_output_____
###Markdown
 To create the explainability using this subset of time-series, configure the following datatypes:* ExplainabilityConfig - set values for TimeSeriesGranularity to “SPECIFIC” and TimePointGranularity to “ALL”.```pythonExplainabilityConfig={"TimeSeriesGranularity": "SPECIFIC", "TimePointGranularity": "ALL"}```* S3Config - set the values for “Path” to the S3 location of the CSV file and “RoleArn” to a role with access to the S3 bucket.```python"S3Config": {"Path": item_subset_path, "RoleArn": role_arn}```* Schema - define the “AttributeName” and “AttributeType” for item_id and the dimensions in the time series.```pythonSchema={"Attributes": [{"AttributeName": "item_id", "AttributeType": "string", "AttributeCategory": "item_id"} ] }```In order to view the explainability results on the console, we set EnableVisualization to True.```pythonEnableVisualization=True```
###Code
create_expainability_response=forecast.create_explainability(ExplainabilityName=explainability_name,
ResourceArn=forecast_arn,
ExplainabilityConfig={"TimeSeriesGranularity": "SPECIFIC", "TimePointGranularity": "ALL"},
DataSource=
{"S3Config":
{"Path": item_subset_path,
"RoleArn": role_arn}
},
Schema=
{"Attributes":
[{"AttributeName": "item_id",
"AttributeType": "string",
"AttributeCategory": "item_id"}
]
},
EnableVisualization=True)
explainability_item_level_arn = create_expainability_response['ExplainabilityArn']
status = util.wait(lambda: forecast.describe_explainability(ExplainabilityArn=explainability_item_level_arn))
assert status
forecast.describe_explainability(ExplainabilityArn=explainability_item_level_arn)
###Output
_____no_output_____
###Markdown
 We can also view the results on the Amazon Forecast console.For more details about using the Forecast console to create and view explainabilities, see: https://aws.amazon.com/blogs/machine-learning/understand-drivers-that-influence-your-forecasts-with-explainability-impact-scores-in-amazon-forecast/  From the dropdown, selecting the aggregate impact score across all time-series and time-points in the explainability job shows that Model_Type has an impact score of 0.361, meaning overall Model_Type moderately drives up the forecasted order quantities. Customer_Request (the number of customers on the waitlist for an item) has slightly less impact, with a score of 0.2608. Loss_Rate (the items damaged during transportation) has an impact of 0.1003, less than half of that of Customer_Request. Holidays has almost no impact, with a score of almost 0. Next, let's select a specific time-series: Guitar_1  Guitar 1 across the timepoints explained in this job has a very high impact of 1 for Model_Type, meaning that for Guitar 1 this attribute has a high impact that increases the forecasted values over the time-series in this job. Customer_Request has a much lower impact that still increases the forecast values.Holiday_US has no impact.Loss_Rate, represented by the bar in red, has an impact of 0.0419, but this impact decreases the forecasted values, driving them lower. You can also view scores for the items in this job at specific time-points, by selecting a time-point from the drop-down. Step 4b. Create Explainability export for specific time-series Forecast enables you to export a CSV file of Impact scores to an S3 location. These exports are more detailed than the Impact scores displayed in the console.If you use the “Specific time series” or “Specific time series and time points” scopes, Forecast will also export aggregated impact scores. Exports for the “Specific time series” scope include aggregated normalized scores for the specified time series, and exports for the “Specific time series and time points” scope include aggregated normalized scores for the specified time points.
###Code
explainability_export_name = f"{project}_item_level_explainability_export_{idx}"
explainability_export_destination = f"s3://{bucket_name}/{project}/{explainability_export_name}"
explainability_export_response = forecast.create_explainability_export(ExplainabilityExportName=explainability_export_name,
ExplainabilityArn=explainability_item_level_arn,
Destination=
{"S3Config":
{"Path": explainability_export_destination,
"RoleArn": role_arn}
})
explainability_export_item_level_arn = explainability_export_response['ExplainabilityExportArn']
status = util.wait(lambda: forecast.describe_explainability_export(ExplainabilityExportArn=explainability_export_item_level_arn))
assert status
forecast.describe_explainability_export(ExplainabilityExportArn=explainability_export_item_level_arn)
###Output
_____no_output_____
###Markdown
Now let's load and view the data
###Code
export_data = read_explainability_export(bucket_name, project+"/"+explainability_export_name)
###Output
_____no_output_____
###Markdown
 The export for the “Specific time series” scope contains raw and normalized impact scores for the specified time series, as well as a normalized aggregated impact score for all specified time series. There are no raw impact scores for the aggregate because, like with the “Entire forecast” scope, the aggregated scores are already representative of all specified time series. Impact scores come in two forms: Normalized impact scores and Raw impact scores. Raw impact scores are based on Shapley values and are not scaled or bounded. Normalized impact scores scale the raw scores to a value between -1 and 1 to make comparing scores within the Explainability insight easier. The export file contains the aggregate impact scores across all time-series in the job across all time-points
###Code
export_data.loc[export_data['item_id'] == "Aggregate"]
###Output
_____no_output_____
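The export contains a raw and a normalized column per attribute. As a rough illustration of how normalized scores relate to raw Shapley-based scores — assuming a simple max-absolute rescaling into [-1, 1], which is an illustrative assumption rather than Forecast's documented formula — something like the following could be computed from the export (the raw column names are assumptions; check `export_data.columns` for the real ones):

```python
# Illustrative only: rescale the raw impact-score columns by the largest absolute value.
raw_cols = [c for c in export_data.columns
            if c.endswith("ImpactScore") and "Normalized" not in c]
rescaled = export_data[raw_cols] / export_data[raw_cols].abs().max().max()
rescaled.head()
```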
###Markdown
The export file also contains the aggregate impact scores across all time-points for each time-series in the job
###Code
export_data.loc[export_data['timestamp'] == "Aggregate"].loc[export_data["item_id"] != "Aggregate"]
###Output
_____no_output_____
###Markdown
 From the normalized impact scores, we find that for Guitar_1, Model_Type has a high impact score close to 1, meaning this attribute is driving up the forecasted values for Guitar_1, as the maximum normalized impact score is 1.Guitar_4, on the other hand, has a normalized impact score close to 1 for Customer_Request, meaning this attribute has a higher impact on Guitar_4 than Loss_Rate does.Guitars 2 and 3 are overall not impacted by these features, with aggregate impact scores of 0. Aggregating impact scoresForecast imposes a limit of 50 time-series that can be explained per explainability job.If you have more than 50 items to explain, the explainability for all the time-series can be generated in multiple batches. From there, if you want to generate an aggregate score for all time-series across explainability jobs, this can be done by taking an average of the normalized impact scores for each feature. We will create one more explainability job, this time with a different set of items, and aggregate the results with those from the first batch.
###Code
second_item_subset_file = "InstrumentData/second_item_subset.csv"
second_item_subset_df = pd.read_csv(second_item_subset_file, names=['item_id'])
second_item_subset_df.style.hide_index()
###Output
_____no_output_____
###Markdown
Now, save the local item subset to S3
###Code
key = f"{project}/InstrumentData/second_item_subset.csv"
boto3.Session().resource('s3').Bucket(bucket_name).Object(key).upload_file(second_item_subset_file)
item_subset_path = f"s3://{bucket_name}/{key}"
explainability_name_second_batch = f"{project}_item_level_explainability_2nd_batch_{idx}"
create_expainability_response=forecast.create_explainability(ExplainabilityName=explainability_name_second_batch,
ResourceArn=forecast_arn,
ExplainabilityConfig={"TimeSeriesGranularity": "SPECIFIC", "TimePointGranularity": "ALL"},
DataSource=
{"S3Config":
{"Path": item_subset_path,
"RoleArn": role_arn}
},
Schema=
{"Attributes":
[{"AttributeName": "item_id",
"AttributeType": "string",
"AttributeCategory": "item_id"}
]
},
EnableVisualization=True)
explainability_item_level_batch2_arn = create_expainability_response['ExplainabilityArn']
status = util.wait(lambda: forecast.describe_explainability(ExplainabilityArn=explainability_item_level_batch2_arn))
assert status
forecast.describe_explainability(ExplainabilityArn=explainability_item_level_batch2_arn)
###Output
_____no_output_____
###Markdown
Now export the explainability results for the second batch of items
###Code
explainability_export_name_second_batch = f"{project}_item_level_export_batch2_{idx}"
explainability_export_destination = f"s3://{bucket_name}/{project}/{explainability_export_name_second_batch}"
explainability_export_response = forecast.create_explainability_export(ExplainabilityExportName=explainability_export_name_second_batch,
ExplainabilityArn=explainability_item_level_batch2_arn,
Destination=
{"S3Config":
{"Path": explainability_export_destination,
"RoleArn": role_arn}
})
explainability_export_item_level_batch2_arn = explainability_export_response['ExplainabilityExportArn']
status = util.wait(lambda: forecast.describe_explainability_export(ExplainabilityExportArn=explainability_export_item_level_batch2_arn))
assert status
forecast.describe_explainability_export(ExplainabilityExportArn=explainability_export_item_level_batch2_arn)
export_data_second_batch = read_explainability_export(bucket_name, project+"/"+explainability_export_name_second_batch)
###Output
_____no_output_____
###Markdown
Concatenate the explainability export results
###Code
export_combined_data = pd.concat([export_data, export_data_second_batch])
###Output
_____no_output_____
###Markdown
 Now that we have the results from both explainability jobs, we take an average of the normalized impact scores for each feature in the data.
###Code
normalized_columns = ['Customer_Request-NormalizedImpactScore', 'Loss_Rate-NormalizedImpactScore','Model_Type-NormalizedImpactScore','Holiday_US-NormalizedImpactScore']
aggregate_impact_scores = pd.DataFrame(export_combined_data[normalized_columns].mean(), columns=['Mean'])
aggregate_impact_scores
###Output
_____no_output_____
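The simple mean above weights every row of the combined export equally, including each batch's own 'Aggregate' rows. If you prefer each item to contribute exactly once, one option is to average only the per-item aggregate rows — a sketch under that assumption:

```python
# Sketch: average only the per-item "Aggregate over all time points" rows,
# so every item contributes once to the cross-batch score.
per_item = export_combined_data.loc[
    (export_combined_data["timestamp"] == "Aggregate")
    & (export_combined_data["item_id"] != "Aggregate")
]
per_item[normalized_columns].mean()
```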
###Markdown
 Now we have the aggregate normalized impact scores for all items in both batches. 4c. Create Explainability for specific time-series at specific time-points When you specify time points for Forecast Explainability, Amazon Forecast calculates Impact scores for attributes for that specific time range. You can specify up to 500 consecutive time points within the forecast horizon.The Impact scores can be interpreted as the impact attributes have on a specific time series at a given time. The modifications to create explainability for specific time-series at specific time-points are:* In ExplainabilityConfig, set values for TimeSeriesGranularity to “SPECIFIC” and TimePointGranularity to “SPECIFIC”.```pythonExplainabilityConfig={"TimeSeriesGranularity": "SPECIFIC", "TimePointGranularity": "SPECIFIC"}```* Provide a StartDateTime and EndDateTime in the request. Impact scores will be generated for all time-points between the StartDateTime and EndDateTime. For example:```pythonEndDateTime="2022-11-30T09:00:00",StartDateTime="2022-01-30T09:00:00"``` First, upload the item subset to S3 and set the explainability name
###Code
key = f"{project}/InstrumentData/second_item_subset.csv"
boto3.Session().resource('s3').Bucket(bucket_name).Object(key).upload_file(second_item_subset_file)
item_subset_path = f"s3://{bucket_name}/{key}"
explainability_name = f"{project}_item_timepoint_level_explainability_{idx}"
###Output
_____no_output_____
###Markdown
Now create the explainability
###Code
create_expainability_response=forecast.create_explainability(ExplainabilityName=explainability_name,
ResourceArn=forecast_arn,
ExplainabilityConfig={"TimeSeriesGranularity": "SPECIFIC",
"TimePointGranularity": "SPECIFIC"},
DataSource=
{"S3Config":
{"Path": item_subset_path,
"RoleArn": role_arn}
},
Schema=
{"Attributes":
[
{"AttributeName": "item_id",
"AttributeType": "string",
"AttributeCategory": "item_id"}
]
},
EndDateTime="2022-11-30T09:00:00",
StartDateTime="2022-01-30T09:00:00",
EnableVisualization=True)
explainability_item_and_timepoint_level_arn = create_expainability_response['ExplainabilityArn']
status = util.wait(lambda: forecast.describe_explainability(ExplainabilityArn=explainability_item_and_timepoint_level_arn))
assert status
forecast.describe_explainability(ExplainabilityArn=explainability_item_and_timepoint_level_arn)
###Output
_____no_output_____
###Markdown
 Now that the explainability job is Active, we can view the results on the Forecast console.For more details about using the Forecast console to create and view explainabilities, see: https://aws.amazon.com/blogs/machine-learning/understand-drivers-that-influence-your-forecasts-with-explainability-impact-scores-in-amazon-forecast/ From the console, let's look specifically at Guitar_5, by selecting this item from the drop-down. We'll compare the scores for Guitar_5 at two different time-points, to see how the impact scores can change for each forecasted time-point.  The forecasted value for Guitar_5 on Jan 30th 2022 is highly impacted by Customer_Request, which has a normalized impact score of 1. Holiday_US has the next highest impact score of 0.3789, followed by Loss_Rate and Model_Type. Now, let's select the next time-point in the forecast horizon for Guitar_5, on Feb 28th 2022.  For the same Guitar one month later, the Customer_Request impact score changes from 1 to 0.8054.The impact of Holiday_US drops from 0.3739 in January down to 0 (no impact) in February.Drilling down to specific time-points can paint a more detailed picture of how each attribute in the data is impacting each item over time. You still have the option of viewing the aggregate results for a specific item across time-points, or for all items in the explainability job across all time-points, by selecting 'Aggregate' and 'All' from the drop-down. Step 4d. Create Explainability export for specific time-series at specific time-points
###Code
explainability_export_name = f"{project}_item_and_timepoints_level_export_{idx}"
explainability_export_destination = f"s3://{bucket_name}/{project}/{explainability_export_name}"
explainability_export_response = forecast.create_explainability_export(ExplainabilityExportName=explainability_export_name,
ExplainabilityArn=explainability_item_and_timepoint_level_arn,
Destination=
{"S3Config":
{"Path": explainability_export_destination,
"RoleArn": role_arn}
}
)
explainability_export_item_and_timepoint_level_arn = explainability_export_response['ExplainabilityExportArn']
status = util.wait(lambda: forecast.describe_explainability_export(ExplainabilityExportArn=explainability_export_item_and_timepoint_level_arn))
assert status
forecast.describe_explainability_export(ExplainabilityExportArn=explainability_export_item_and_timepoint_level_arn)
export_data_specific_items_and_time_points = read_explainability_export(bucket_name, project+"/"+explainability_export_name)
###Output
_____no_output_____
###Markdown
The export for the “Specific time series and time points” scope contains raw and normalized impact scores for the specified time series and time points, as well as normalized and raw aggregated impact scores for all specified time points. We'll take a look at the results for a specific item, Guitar_5
###Code
export_data_specific_items_and_time_points
export_data.loc[export_data['item_id'] == "Guitar_5"]
###Output
_____no_output_____
###Markdown
 Step 5. Cleaning up your Resources Once we have completed the above steps, we can start to clean up the resources we created. All delete jobs, except for `delete_dataset_group`, are asynchronous, so we have added the helpful `wait_till_delete` function. Resource limits are documented in the Amazon Forecast developer guide. If you want to clean up all the resources generated in this notebook, uncomment the lines in the cells below. Delete explainability exports:
###Code
#util.wait_till_delete(lambda: forecast.delete_explainability_export(ExplainabilityExportArn = explainability_export_model_level_arn))
#util.wait_till_delete(lambda: forecast.delete_explainability_export(ExplainabilityExportArn = explainability_export_item_level_arn))
#util.wait_till_delete(lambda: forecast.delete_explainability_export(ExplainabilityExportArn = explainability_export_item_level_batch2_arn))
#util.wait_till_delete(lambda: forecast.delete_explainability_export(ExplainabilityExportArn = explainability_export_item_and_timepoint_level_arn))
###Output
_____no_output_____
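The `util.wait_till_delete` helper used in these cleanup cells also lives in the workshop's `util` module. A minimal, hypothetical sketch — assuming it simply retries the delete callable until the service reports the resource as gone — might look like this (not the actual implementation):

```python
import time
from botocore.exceptions import ClientError

def wait_till_delete_sketch(delete_fn, poll_seconds=10, timeout_seconds=600):
    """Call delete_fn repeatedly until the resource no longer exists (sketch only)."""
    start = time.time()
    while time.time() - start < timeout_seconds:
        try:
            delete_fn()
        except ClientError as err:
            if err.response["Error"]["Code"] == "ResourceNotFoundException":
                return            # resource is gone
            raise
        time.sleep(poll_seconds)  # give the asynchronous delete time to finish
    raise TimeoutError("resource was not deleted within the timeout")
```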
###Markdown
Delete explainabilities:
###Code
#util.wait_till_delete(lambda: forecast.delete_explainability(ExplainabilityArn = explainability_item_level_arn))
#util.wait_till_delete(lambda: forecast.delete_explainability(ExplainabilityArn = explainability_item_level_batch2_arn))
#util.wait_till_delete(lambda: forecast.delete_explainability(ExplainabilityArn = explainability_model_level_arn))
#util.wait_till_delete(lambda: forecast.delete_explainability(ExplainabilityArn = explainability_item_and_timepoint_level_arn))
###Output
_____no_output_____
###Markdown
Delete forecast:
###Code
#util.wait_till_delete(lambda: forecast.delete_forecast(ForecastArn = forecast_arn))
###Output
_____no_output_____
###Markdown
Delete predictor:
###Code
#util.wait_till_delete(lambda: forecast.delete_predictor(PredictorArn = predictor_arn))
###Output
_____no_output_____
###Markdown
Delete dataset imports for TTS, RTS and IM:
###Code
#util.wait_till_delete(lambda: forecast.delete_dataset_import_job(DatasetImportJobArn=ts_dataset_import_job_arn))
#util.wait_till_delete(lambda: forecast.delete_dataset_import_job(DatasetImportJobArn=rts_dataset_import_job_arn))
#util.wait_till_delete(lambda: forecast.delete_dataset_import_job(DatasetImportJobArn=im_dataset_import_job_arn))
###Output
_____no_output_____
###Markdown
Delete the datasets for TTS, RTS and IM
###Code
#util.wait_till_delete(lambda: forecast.delete_dataset(DatasetArn=ts_dataset_arn))
#util.wait_till_delete(lambda: forecast.delete_dataset(DatasetArn=rts_dataset_arn))
#util.wait_till_delete(lambda: forecast.delete_dataset(DatasetArn=im_dataset_arn))
###Output
_____no_output_____
###Markdown
Delete the dataset group
###Code
#forecast.delete_dataset_group(DatasetGroupArn=dataset_group_arn)
###Output
_____no_output_____
###Markdown
Delete the IAM role
###Code
#util.delete_iam_role( role_name )
###Output
_____no_output_____ |
tutorials/old_generation_notebooks/colab/5- How to use Spark NLP and Spark ML Pipelines.ipynb | ###Markdown
 Spark NLP and Spark ML Pipelines
###Code
import os
# Install java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
# Install pyspark
! pip install --ignore-installed pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp
###Output
_____no_output_____
###Markdown
Simple Topic Modeling`Spark-NLP`* DocumentAssembler* SentenceDetector* Tokenizer* Normalizer* POS tagger* Chunker* Finisher`Spark ML`* Hashing* TF-IDF* LDA
###Code
import sys
import time
from pyspark.sql.functions import col
from pyspark.ml.feature import CountVectorizer, HashingTF, IDF, Tokenizer
from pyspark.ml.clustering import LDA, LDAModel
#Spark NLP
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
from sparknlp.annotator import *
from sparknlp.common import RegexRule
from sparknlp.base import *
###Output
_____no_output_____
###Markdown
Let's create a Spark Session for our app
###Code
spark = sparknlp.start()
print("Spark NLP version: ", sparknlp.version())
print("Apache Spark version: ", spark.version)
###Output
Spark NLP version: 2.4.2
Apache Spark version: 2.4.4
###Markdown
 Let's download a scientific sample from the PubMed dataset:```wget -N https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/resources/en/pubmed/pubmed-sample.csv -P /tmp```
###Code
! wget -N https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/resources/en/pubmed/pubmed-sample.csv -P /tmp
pubMedDF = spark.read\
.option("header", "true")\
.csv("/tmp/pubmed-sample.csv")\
.filter("AB IS NOT null")\
.withColumn("text", col("AB"))\
.drop("TI", "AB")
pubMedDF.printSchema()
pubMedDF.show()
pubMedDF.count()
pubMedDF = pubMedDF.limit(2000)
###Output
_____no_output_____
###Markdown
Let's create Spark-NLP Pipeline
###Code
# Spark NLP Pipeline
document_assembler = DocumentAssembler() \
.setInputCol("text")
sentence_detector = SentenceDetector() \
.setInputCols(["document"]) \
.setOutputCol("sentence")
tokenizer = Tokenizer() \
.setInputCols(["sentence"]) \
.setOutputCol("token")
posTagger = PerceptronModel.pretrained() \
.setInputCols(["sentence", "token"])
chunker = Chunker() \
.setInputCols(["sentence", "pos"]) \
.setOutputCol("chunk") \
.setRegexParsers(["<NNP>+", "<DT>?<JJ>*<NN>"])
finisher = Finisher() \
.setInputCols(["chunk"]) \
.setIncludeMetadata(False)
nlpPipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
posTagger,
chunker,
finisher
])
nlpPipelineDF = nlpPipeline.fit(pubMedDF).transform(pubMedDF)
###Output
_____no_output_____
###Markdown
Let's create Spark ML Pipeline
###Code
# Spark ML Pipeline
cv = CountVectorizer(inputCol="finished_chunk", outputCol="features", vocabSize=1000, minDF=10.0, minTF=10.0)
idf = IDF(inputCol="features", outputCol="idf")
lda = LDA(k=10, maxIter=5)
# Let's assemble the Spark ML Pipeline
mlPipeline = Pipeline(stages=[
cv,
idf,
lda
])
###Output
_____no_output_____
###Markdown
 We are going to train the Spark ML Pipeline using the output of the Spark NLP Pipeline
###Code
# Fit the Spark ML Pipeline on the output of the Spark NLP Pipeline
mlModel = mlPipeline.fit(nlpPipelineDF)
mlPipelineDF = mlModel.transform(nlpPipelineDF)
mlPipelineDF.show()
ldaModel = mlModel.stages[2]
ll = ldaModel.logLikelihood(mlPipelineDF)
lp = ldaModel.logPerplexity(mlPipelineDF)
print("The lower bound on the log likelihood of the entire corpus: " + str(ll))
print("The upper bound on perplexity: " + str(lp))
# Describe topics.
print("The topics described by their top-weighted terms:")
ldaModel.describeTopics(3).show(truncate=False)
###Output
_____no_output_____
###Markdown
 Let's look at our topicsNOTE: More cleaning, filtering, playing around with `CountVectorizer`, and more iterations in `LDA` will result in better Topic Modelling results.
###Code
# Output topics. Each is a distribution over words (matching word count vectors)
print("Learned topics (as distributions over vocab of " + str(ldaModel.vocabSize())
+ " words):")
topics = ldaModel.describeTopics(50)
topics_rdd = topics.rdd
vocab = mlModel.stages[0].vocabulary
topics_words = topics_rdd\
.map(lambda row: row['termIndices'])\
.map(lambda idx_list: [vocab[idx] for idx in idx_list])\
.collect()
for idx, topic in enumerate(topics_words):
print("topic: ", idx)
print("----------")
for word in topic:
print(word)
print("----------")
###Output
_____no_output_____
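As the note above says, heavier cleaning and more LDA iterations usually produce cleaner topics. A sketch of one alternative configuration — the parameter values below are illustrative assumptions, not tuned results:

```python
# Illustrative re-run with a larger vocabulary threshold and more LDA iterations.
cv2 = CountVectorizer(inputCol="finished_chunk", outputCol="features",
                      vocabSize=5000, minDF=20.0)   # keep chunks that appear in >= 20 documents
lda2 = LDA(k=10, maxIter=50)                        # far more iterations than maxIter=5 above
mlPipeline2 = Pipeline(stages=[cv2, IDF(inputCol="features", outputCol="idf"), lda2])
mlModel2 = mlPipeline2.fit(nlpPipelineDF)
mlModel2.stages[2].describeTopics(5).show(truncate=False)
```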
###Markdown
 Spark NLP and Spark ML Pipelines
###Code
import os
# Install java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
# Install pyspark
! pip install --ignore-installed pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp
###Output
_____no_output_____
###Markdown
Simple Topic Modeling`Spark-NLP`* DocumentAssembler* SentenceDetector* Tokenizer* Normalizer* POS tagger* Chunker* Finisher`Spark ML`* Hashing* TF-IDF* LDA
###Code
import sys
import time
from pyspark.sql.functions import col
from pyspark.ml.feature import CountVectorizer, HashingTF, IDF, Tokenizer
from pyspark.ml.clustering import LDA, LDAModel
#Spark NLP
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
from sparknlp.annotator import *
from sparknlp.common import RegexRule
from sparknlp.base import *
###Output
_____no_output_____
###Markdown
Let's create a Spark Session for our app
###Code
spark = sparknlp.start()
print("Spark NLP version: ", sparknlp.version())
print("Apache Spark version: ", spark.version)
###Output
_____no_output_____
###Markdown
 Let's download a scientific sample from the PubMed dataset:```wget -N https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/resources/en/pubmed/pubmed-sample.csv -P /tmp```
###Code
! wget -N https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/resources/en/pubmed/pubmed-sample.csv -P /tmp
pubMedDF = spark.read\
.option("header", "true")\
.csv("/tmp/pubmed-sample.csv")\
.filter("AB IS NOT null")\
.withColumn("text", col("AB"))\
.drop("TI", "AB")
pubMedDF.printSchema()
pubMedDF.show()
pubMedDF.count()
pubMedDF = pubMedDF.limit(2000)
###Output
_____no_output_____
###Markdown
Let's create Spark-NLP Pipeline
###Code
# Spark NLP Pipeline
document_assembler = DocumentAssembler() \
.setInputCol("text")
sentence_detector = SentenceDetector() \
.setInputCols(["document"]) \
.setOutputCol("sentence")
tokenizer = Tokenizer() \
.setInputCols(["sentence"]) \
.setOutputCol("token")
posTagger = PerceptronModel.pretrained() \
.setInputCols(["sentence", "token"])
chunker = Chunker() \
.setInputCols(["sentence", "pos"]) \
.setOutputCol("chunk") \
.setRegexParsers(["<NNP>+", "<DT>?<JJ>*<NN>"])
finisher = Finisher() \
.setInputCols(["chunk"]) \
.setIncludeMetadata(False)
nlpPipeline = Pipeline(stages=[
document_assembler,
sentence_detector,
tokenizer,
posTagger,
chunker,
finisher
])
nlpPipelineDF = nlpPipeline.fit(pubMedDF).transform(pubMedDF)
###Output
_____no_output_____
###Markdown
Let's create Spark ML Pipeline
###Code
# Spark ML Pipeline
cv = CountVectorizer(inputCol="finished_chunk", outputCol="features", vocabSize=1000, minDF=10.0, minTF=10.0)
idf = IDF(inputCol="features", outputCol="idf")
lda = LDA(k=10, maxIter=5)
# Let's assemble the Spark ML Pipeline
mlPipeline = Pipeline(stages=[
cv,
idf,
lda
])
###Output
_____no_output_____
###Markdown
 We are going to train the Spark ML Pipeline using the output of the Spark NLP Pipeline
###Code
# Fit the Spark ML Pipeline on the output of the Spark NLP Pipeline
mlModel = mlPipeline.fit(nlpPipelineDF)
mlPipelineDF = mlModel.transform(nlpPipelineDF)
mlPipelineDF.show()
ldaModel = mlModel.stages[2]
ll = ldaModel.logLikelihood(mlPipelineDF)
lp = ldaModel.logPerplexity(mlPipelineDF)
print("The lower bound on the log likelihood of the entire corpus: " + str(ll))
print("The upper bound on perplexity: " + str(lp))
# Describe topics.
print("The topics described by their top-weighted terms:")
ldaModel.describeTopics(3).show(truncate=False)
###Output
_____no_output_____
###Markdown
 Let's look at our topicsNOTE: More cleaning, filtering, playing around with `CountVectorizer`, and more iterations in `LDA` will result in better Topic Modelling results.
###Code
# Output topics. Each is a distribution over words (matching word count vectors)
print("Learned topics (as distributions over vocab of " + str(ldaModel.vocabSize())
+ " words):")
topics = ldaModel.describeTopics(50)
topics_rdd = topics.rdd
vocab = mlModel.stages[0].vocabulary
topics_words = topics_rdd\
.map(lambda row: row['termIndices'])\
.map(lambda idx_list: [vocab[idx] for idx in idx_list])\
.collect()
for idx, topic in enumerate(topics_words):
print("topic: ", idx)
print("----------")
for word in topic:
print(word)
print("----------")
###Output
_____no_output_____ |
algorithm/Search.ipynb | ###Markdown
Binary Search
###Code
from typing import List

def binary_search(nums: List[int], target: int) -> int:
left = 0
right = len(nums) - 1
while left <= right:
mid = (left + right) // 2
if nums[mid] == target:
return mid
elif nums[mid] < target:
left = mid + 1
elif nums[mid] > target:
right = mid -1
return -1
nums = [1,2,2,2,3]
binary_search(nums, 2)
def binary_search_left(nums: List[int], target: int) -> int:
left = 0
right = len(nums)
while left < right:
mid = (left + right) // 2
if nums[mid] < target:
left = mid + 1
else:
# nums[mid] >= target
right = mid
    return left if left < len(nums) and nums[left] == target else -1
binary_search_left(nums, 2)
def binary_search_right(nums: List[int], target: int) -> int:
left = 0
right = len(nums)
while left < right:
mid = (left + right) // 2
if nums[mid] == target:
left = mid + 1
elif nums[mid] < target:
left = mid + 1
elif nums[mid] > target:
right = mid
    if left == 0 or nums[left - 1] != target:
        return -1
    return left - 1
binary_search_right(nums, 2)
###Output
_____no_output_____
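For reference, Python's standard-library `bisect` module implements the same left/right boundary searches; a quick cross-check against the list used above:

```python
from bisect import bisect_left, bisect_right

# bisect_left returns the index of the first occurrence when the target is present,
# bisect_right returns one past the last occurrence.
print(bisect_left(nums, 2))        # 1 -> same as binary_search_left(nums, 2)
print(bisect_right(nums, 2) - 1)   # 3 -> same as binary_search_right(nums, 2)
```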
###Markdown
 Koko Eating Bananas
###Code
from math import ceil
def canFinish(piles: List[int], speed: int, H: int):
return sum([timeof(n, speed) for n in piles]) <= H
def timeof(n: int, speed: int):
return ceil(n / speed)
def min_eating_speed_forch(piles: List[int], H: int):
_max = max(piles)
for speed in range(1, _max):
if (canFinish(piles, speed, H)):
return speed
return _max
piles = [3, 6, 7, 11]
H = 8
min_eating_speed_forch(piles, H)
piles = [30, 11, 23, 4, 20]
H = 5
min_eating_speed_forch(piles, H)
def min_eating_speed_bs(piles: List[int], H: int):
left = 1
right = max(piles) + 1
while left < right:
mid = left + (right - left) // 2
if canFinish(piles, mid, H):
right = mid
else:
left = mid + 1
return left
piles = [30, 11, 23, 4, 20]
H = 5
min_eating_speed_bs(piles, H)
###Output
_____no_output_____ |
fp_bert.ipynb | ###Markdown
Load the datasets
###Code
train = pd.read_csv("data/Constraint_Train.csv")
val = pd.read_csv("data/Constraint_Val.csv")
test = pd.read_csv("data/english_test_with_labels.csv")
train_c, train_l = train['tweet'].to_numpy(), train['label'].to_numpy()
train_l = np.array([0 if i == 'fake' else 1 for i in train_l])
val_c, val_l = val['tweet'].to_numpy(), val['label'].to_numpy()
val_l = np.array([0 if i == 'fake' else 1 for i in val_l])
test_c, test_l = test['tweet'].to_numpy(), test['label'].to_numpy()
test_l = np.array([0 if i == 'fake' else 1 for i in test_l])
# print(train_c)
# display(train.head())
###Output
_____no_output_____
###Markdown
Preprocess ---
###Code
wordnet_lemmatizer = WordNetLemmatizer()
porter_stemmer = PorterStemmer()
p.set_options(p.OPT.URL, p.OPT.EMOJI)
def preprocess(row, lemmatizer, stemmer):
txt = row
txt = p.clean(txt)
tokenization = nltk.word_tokenize(txt)
    tokenization = [w for w in tokenization if w not in stop_words]
    txt = ' '.join(tokenization)  # rejoin the filtered tokens so the stop-word removal actually takes effect
# txt = ' '.join([porter_stemmer.stem(w) for w in tokenization])
# txt = ' '.join([lemmatizer.lemmatize(w) for w in txt])
txt = re.sub(r'[^a-zA-Z ]', '', txt).lower().strip()
return txt
train_c = [preprocess(x, wordnet_lemmatizer, porter_stemmer) for x in train_c]
val_c = [preprocess(x, wordnet_lemmatizer, porter_stemmer) for x in val_c]
test_c = [preprocess(x, wordnet_lemmatizer, porter_stemmer) for x in test_c]
###Output
_____no_output_____
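To sanity-check the cleaning steps, it can help to push a single made-up tweet through `preprocess`; the example string and the expected output below are illustrative only (the exact result depends on the tokenizer and stop-word list in use):

```python
sample = "COVID19 vaccine update: cases rising! https://example.com #StaySafe"
print(preprocess(sample, wordnet_lemmatizer, porter_stemmer))
# -> roughly "covid vaccine update cases rising staysafe"
```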
###Markdown
---
###Code
# the model will remember only the top 20000 most common words
max_words = 20000
max_len = 300
voc = np.array(train_c + val_c + test_c)
token = Tokenizer(num_words=max_words, lower=True, split=' ')
token.fit_on_texts(voc)
sequences = token.texts_to_sequences(train_c)
train_sequences_padded = pad_sequences(sequences, maxlen=max_len)
sequences = token.texts_to_sequences(val_c)
val_sequences_padded = pad_sequences(sequences, maxlen=max_len)
sequences = token.texts_to_sequences(test_c)
test_sequences_padded = pad_sequences(sequences, maxlen=max_len)
def net():
model = Sequential([
Embedding(max_words, 100, input_length=300),
Conv1D(32, 8, activation='relu', padding="same"),
MaxPooling1D(2),
LSTM(32),
Dense(10, activation="relu"),
Dropout(0.5),
Dense(1, activation="sigmoid")
])
opt = tf.keras.optimizers.Adam(learning_rate=0.0005)
model.compile(loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=opt, metrics=['accuracy'])
return model
model = net()
model.fit(train_sequences_padded, train_l, batch_size=32, epochs=5,
validation_data=(val_sequences_padded, val_l))
pred = model.predict(test_sequences_padded)
pred = pred.reshape(-1)  # flatten to a 1-D array instead of hard-coding the test-set size
pred = np.array([0 if i < 0.5 else 1 for i in pred])
acc = accuracy_score(test_l, pred)
print(acc)
###Output
0.935981308411215
|
Python_Stock/Candlestick_Patterns/Candlestick_Two_Crows.ipynb | ###Markdown
Candlestick Two Crows https://www.investopedia.com/terms/u/upside-gap-two-crows.asp
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import talib
import warnings
warnings.filterwarnings("ignore")
# yahoo finance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
symbol = 'AMD'
start = '2018-01-01'
end = '2020-01-01'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
###Output
[*********************100%***********************] 1 of 1 completed
###Markdown
Candlestick with Two Crows
###Code
from matplotlib import dates as mdates
import datetime as dt
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = pd.to_datetime(dfc['Date'])
dfc['Date'] = dfc['Date'].apply(mdates.date2num)
dfc.head()
from mplfinance.original_flavor import candlestick_ohlc
fig = plt.figure(figsize=(14,10))
ax = plt.subplot(2, 1, 1)
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax.grid(True, which='both')
ax.minorticks_on()
axv = ax.twinx()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
two_crows = talib.CDL2CROWS(df['Open'], df['High'], df['Low'], df['Close'])
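# Note: TA-Lib candlestick functions such as CDL2CROWS return one integer per bar —
# 0 when the pattern is absent and a non-zero code (typically +/-100) when it is detected.
# The exact codes are an assumption here; np.unique(two_crows) can be used to inspect them.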
two_crows = two_crows[two_crows != 0]
df['two_crows'] = talib.CDL2CROWS(df['Open'], df['High'], df['Low'], df['Close'])
df.loc[df['two_crows'] !=0]
df['Adj Close'].loc[df['two_crows'] !=0]
df['Adj Close'].loc[df['two_crows'] !=0].index
two_crows
two_crows.index
df
fig = plt.figure(figsize=(20,16))
ax = plt.subplot(2, 1, 1)
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax.grid(True, which='both')
ax.minorticks_on()
axv = ax.twinx()
ax.plot_date(df['Adj Close'].loc[df['two_crows'] !=0].index, df['Adj Close'].loc[df['two_crows'] !=0],
             'om', # marker style 'o', color 'm' (magenta)
fillstyle='none', # circle is not filled (with color)
ms=10.0)
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
###Output
_____no_output_____
###Markdown
Plot Certain dates
###Code
df = df['2019-01-01':'2019-04-30']
dfc = df.copy()
dfc['VolumePositive'] = dfc['Open'] < dfc['Adj Close']
#dfc = dfc.dropna()
dfc = dfc.reset_index()
dfc['Date'] = pd.to_datetime(dfc['Date'])
dfc['Date'] = dfc['Date'].apply(mdates.date2num)
dfc.head()
fig = plt.figure(figsize=(20,16))
ax = plt.subplot(2, 1, 1)
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
ax.grid(True, which='both')
ax.minorticks_on()
axv = ax.twinx()
ax.plot_date(df['Adj Close'].loc[df['two_crows'] !=0].index, df['Adj Close'].loc[df['two_crows'] !=0],
             'hk', # marker style 'h' (hexagon), color 'k' (black)
fillstyle='none', # circle is not filled (with color)
ms=30.0)
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
###Output
_____no_output_____
###Markdown
Highlight Candlestick
###Code
from matplotlib.dates import date2num
from datetime import datetime
fig = plt.figure(figsize=(20,16))
ax = plt.subplot(2, 1, 1)
candlestick_ohlc(ax,dfc.values, width=0.5, colorup='g', colordown='r', alpha=1.0)
ax.xaxis_date()
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d-%m-%Y'))
#ax.grid(True, which='both')
#ax.minorticks_on()
axv = ax.twinx()
ax.axvspan(date2num(datetime(2019,1,8)), date2num(datetime(2019,1,9)),
label="Two Crows",color="green", alpha=0.3)
ax.axvspan(date2num(datetime(2019,3,1)), date2num(datetime(2019,3,4)),
label="Two Crows",color="green", alpha=0.3)
ax.legend()
colors = dfc.VolumePositive.map({True: 'g', False: 'r'})
axv.bar(dfc.Date, dfc['Volume'], color=colors, alpha=0.4)
axv.axes.yaxis.set_ticklabels([])
axv.set_ylim(0, 3*df.Volume.max())
ax.set_title('Stock '+ symbol +' Closing Price')
ax.set_ylabel('Price')
###Output
_____no_output_____ |
nbplain/01_jupyter.ipynb | ###Markdown
Jupyter Notebooks Overview:- **Teaching:** 10 min- **Exercises:** 10 min**Questions*** What is a Jupyter notebook?* Why should I use one?**Objectives*** Create a Jupyter notebook* Enter code into the notebook* Augment the code with a description in markdown You should have a Python 3 notebook open, if not take a look at the setup page (the previous page in the schedule). Information: Trying things outIn this lesson you can have a Python 3 jupyter notebook open to try out any of the commands you see here and reproduce the results. Python in notebooksJupyter provides a nice web-based interface to Python. In the below cells you can type a line of Python. For example, here we type in```pythona = 5```When we press `SHIFT+Return` this Python line is interpreted interactively
###Code
a = 5
###Output
_____no_output_____
###Markdown
To see that this has worked, let's print the value of `a`
###Code
print(a)
###Output
5
###Markdown
You can type whatever Python you want, and it will be evaluated interactively. For example...
###Code
b = 10
print(a + b)
###Output
15
###Markdown
 One of the cool things about a Jupyter Python notebook is that you can edit the above cells and re-execute them. For example, change the value of `b` above and re-execute the lines [3] and [4] (by selecting the cell and pressing `SHIFT+Return`). Information: Out of order executionThe ability to go back and change only small snippets of code is very useful, but also very dangerous from a coding point of view. If you edit a code cell and don't run _all_ the code cells after it, then any cell that isn't re-executed is still using the old code. Jupyter allows you to keep track of this by numbering its input, `In [3]` for instance means this block was executed third.If you get in a complete mess you can also clear all output, without removing the input and re-execute the code blocks in order. All of the standard Python help is available from within the notebook. This means that you can type```pythonhelp(something)```to get help about `something`. For example, type and execute `help(print)` to get help about the print function.
###Code
help(print)
###Output
Help on built-in function print in module builtins:
print(...)
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
Prints the values to a stream, or to sys.stdout by default.
Optional keyword arguments:
file: a file-like object (stream); defaults to the current sys.stdout.
sep: string inserted between values, default a space.
end: string appended after the last value, default a newline.
flush: whether to forcibly flush the stream.
###Markdown
Jupyter Notebooks Overview- **Teaching:** 10 min- **Exercises:** 10 min**Questions*** What is a Jupyter notebook?* Why should I use one?**Objectives*** Create a Jupyter notebook* Enter code into the notebook* Augment the code with a description in markdown You should have a Python 3 notebook open, if not take a look at the setup page (the previous page in the schedule). Info: Trying things outIn this lesson you can have a Python 3 jupyter notebook open to try out any of the commands you see here and reproduce the results. Python in notebooksJupyter provides a nice web-based interface to Python. In the below cells you can type a line of Python. For example, here we type in```pythona = 5```When we press `SHIFT+Return` this Python line is interpreted interactively
###Code
a = 5
###Output
_____no_output_____
###Markdown
To see that this has worked, let's print the value of `a`
###Code
print(a)
###Output
5
###Markdown
You can type whatever Python you want, and it will be evaluated interactively. For example...
###Code
b = 10
print(a + b)
###Output
15
###Markdown
 One of the cool things about a Jupyter Python notebook is that you can edit the above cells and re-execute them. For example, change the value of `b` above and re-execute the lines [3] and [4] (by selecting the cell and pressing `SHIFT+Return`). Info: Out of order executionThe ability to go back and change only small snippets of code is very useful, but also very dangerous from a coding point of view. If you edit a code cell and don't run _all_ the code cells after it, then any cell that isn't re-executed is still using the old code. Jupyter allows you to keep track of this by numbering its input, `In [3]` for instance means this block was executed third.If you get in a complete mess you can also clear all output, without removing the input and re-execute the code blocks in order. All of the standard Python help is available from within the notebook. This means that you can type```pythonhelp(something)```to get help about `something`. For example, type and execute `help(print)` to get help about the print function.
###Code
help(print)
###Output
Help on built-in function print in module builtins:
print(...)
print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)
Prints the values to a stream, or to sys.stdout by default.
Optional keyword arguments:
file: a file-like object (stream); defaults to the current sys.stdout.
sep: string inserted between values, default a space.
end: string appended after the last value, default a newline.
flush: whether to forcibly flush the stream.
|
notebooks/ML_explore_data_hyper_pars.ipynb | ###Markdown
Notebook to explore files and test ML stuff on intrinsic data set The idea is to compare this to Rikhav's cut based analysis.
###Code
import numpy as np
import glob
import os
import tqdm
import itertools
import matplotlib.pyplot as plt
import time
import copy
from fastespy.readpydata import convert_data_to_ML_format
from fastespy.plotting import plot_2d_hist, plot_scatter_w_hist
%matplotlib inline
###Output
_____no_output_____
###Markdown
 Explore the dataLet's take a look at the data from Rikhav's intrinsic runs, done at $R_N = 0.3$ and a gain-bandwidth product of 1.5 GHz. The trigger threshold was set to 20 mV.
###Code
path = "../../../data-01152021/"
files = glob.glob(os.path.join(path, '*.npy'))
print(len(files))
print(files[0])
###Output
17
../../../data-01152021/0.3RN-1.5GHzGBWP-intrinsics-50MHz-20mV-1day-16-fit000.npy
###Markdown
 Load one file and inspect its contents:
###Code
x = np.load(files[0], allow_pickle=True).tolist()
print(len(x.keys())) # number of recorded triggers
#print(x.keys()) # in Rikhav's files, the keys are integers from 1,..., N_trigger
x[1]
###Output
1709
###Markdown
From this first look, define the key words that you would like to save
###Code
feature_names = []
remove = ['data', 'time', 'pulse integral raw', 'voltage error',
'error', 'start time in hrs', 'end time in hrs',
'trigger time'
]
for k in x[1].keys():
if not k in remove and not 'error' in k:
feature_names.append(k)
print(feature_names)
result = {'type': []}
t_tot_hrs = 0.
# loop through files
for f in tqdm.tqdm(files):
x = np.load(f, allow_pickle=True).tolist()
# for each file: calculate observation time
t_start = 1e10
t_stop = 0.
# loop through triggers
for i in range(1,len(x.keys())+1):
for name in feature_names:
if not name in result.keys():
result[name] = []
result[name].append(x[i][name])
if 'intrinsic' in f:
if x[i]['end time in hrs'] > t_stop:
t_stop = x[i]['end time in hrs']
if x[i]['start time in hrs'] < t_start:
t_start = x[i]['start time in hrs']
result['type'].append(0)
if 'light' in f:
result['type'].append(1)
if 'intrinsic' in f:
t_tot_hrs += t_stop - t_start # only add for dark count rate
# convert into into numpy arrays
for k, v in result.items():
if k == 'type':
        dtype = bool  # np.bool was deprecated and removed in newer NumPy releases
else:
dtype = np.float32
result[k] = np.array(v, dtype=dtype)
len(result['rise time'])
###Output
_____no_output_____
###Markdown
Plot some histogramsRikhav said the following: "The fit parameters I use for then selecting data are pulse height, amplitude, pulse integral, the exponential rise and decay times, and the chi2"
###Code
# define labels
label = {
'rise time': r'Rise time $(\mu\mathrm{s})$',
'decay time': r'Decay time $(\mu\mathrm{s})$',
'pulse height': r'Pulse height (mV)',
'amplitude': r'Amplitude (mV)',
'pulse integral fit': r'Pulse Integral (mV $\mu$s)',
'chi2 reduced': r'$\chi^2/$d.o.f.',
'constant': 'Constant offset (mV)'
}
k = 'rise time'
bins = np.logspace(-9., -3., 100)
for i in range(2):
m = result['type'] == i
plt.hist(result[k][m], bins=bins, label=i, density=True, alpha=0.5)
plt.legend()
plt.xlabel(label[k])
plt.gca().set_xscale('log')
k = 'decay time'
bins = np.logspace(-7., -2., 100)
for i in range(2):
m = result['type'] == i
plt.hist(result[k][m], bins=bins, label=i, density=True, alpha=0.5)
plt.xlabel(label[k])
plt.gca().set_xscale('log')
k = 'pulse integral fit'
bins = np.logspace(-8., -4., 100)
for i in range(2):
m = result['type'] == i
plt.hist(-result[k][m], bins=bins, label=i, density=True, alpha=0.5)
plt.xlabel(label[k])
plt.gca().set_xscale('log')
k = 'chi2 reduced'
bins = np.logspace(-1, 2, 100)
for i in range(2):
m = result['type'] == i
plt.hist(result[k][m], bins=bins, label=i, density=True, alpha=0.5)
plt.xlabel(label[k])
plt.gca().set_xscale('log')
k = 'pulse height'
bins = np.logspace(-2, -1, 100)
for i in range(2):
m = result['type'] == i
plt.hist(-result[k][m], bins=bins, label=i, density=True, alpha=0.5)
plt.xlabel(label[k])
plt.gca().set_xscale('log')
k = 'amplitude'
bins = np.logspace(-3, -1, 100)
for i in range(2):
m = result['type'] == i
plt.hist(result[k][m], bins=bins, label=i, alpha=0.5)
plt.xlabel(label[k])
plt.gca().set_xscale('log')
k = 'constant'
bins = np.logspace(-5, -1, 100)
for i in range(2):
m = result['type'] == i
plt.hist(result[k][m], bins=bins, label=i, density=True, alpha=0.5)
plt.xlabel(label[k])
plt.gca().set_xscale('log')
###Output
_____no_output_____
###Markdown
Apply machine learningTest machine learning algorithms to separate light from background.
###Code
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import learning_curve
from sklearn.metrics import get_scorer, make_scorer, fbeta_score
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.metrics import ConfusionMatrixDisplay
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
X, y = convert_data_to_ML_format(result, feature_names, bkg_type=0, signal_type=1)
print(np.sum(y == 1), np.sum(y == 0))
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=42,
stratify=y, # use class labels, since large imbalance between light and bkg
test_size=0.2
)
# preprocess data: zero mean, standard deviation of 1
scaler = StandardScaler().fit(X_train)
print(scaler.mean_)
print(scaler.scale_)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
print(X_train_scaled.mean(axis=0))
print(X_train_scaled.std(axis=0))
print(X_test_scaled.mean(axis=0))
print(X_test_scaled.std(axis=0))
print(np.sum(y_train == 1), np.sum(y_train == 0))
print(np.sum(y_test == 1), np.sum(y_test == 0))
# With the test data, perform a K-fold cross validation to
# find the best hyper parameters.
#kf = KFold(n_splits=5, shuffle=True, random_state=42)
kf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42) # retain same percentage of bkg/light samples in each fold
###Output
_____no_output_____
###Markdown
Classifiers and parameter grids for hyperparameter optimization:
###Code
clf = dict(
dt=DecisionTreeClassifier,
bdt=GradientBoostingClassifier,
rf=RandomForestClassifier,
mlp=MLPClassifier
)
param_grid = dict(
dt={'ccp_alpha': np.linspace(0., 0.001, 5),
'min_samples_split': np.arange(2, 53, 10),
'max_depth': np.arange(2, 21, 1)
},
# bdt={'n_estimators': np.arange(100, 1100, 200),
# 'learning_rate': np.arange(0.1, 1.1, 0.2),
# 'max_depth': np.arange(2, 6, 2)
# },
# smaller grid
bdt={'n_estimators': np.arange(100, 1100, 400),
'learning_rate': np.arange(0.1, 1.1, 0.4),
'max_depth': np.arange(2, 10, 3)
},
rf={'n_estimators': np.arange(100, 600, 200),
'max_features': np.arange(1, 6, 2),
'min_samples_split': np.arange(2, 82, 20),
},
mlp={
'hidden_layer_sizes': ((10,), (50,), (10, 10), (50, 50), (10, 10, 10), (50, 50, 50)),
'alpha': 10.**np.arange(-4, 0.5, 0.5)
}
)
default_pars = dict(
dt={'criterion': 'gini',
'min_samples_leaf': 1,
},
bdt={'loss': 'deviance',
'min_samples_split': 2
},
rf={'criterion': 'gini',
'max_depth': None, # fully grown trees
},
mlp={'learning_rate' : 'constant',
'activation': 'relu',
'max_iter': 1000,
'solver': 'adam',
'shuffle': True,
'tol': 1e-4
}
)
###Output
_____no_output_____
###Markdown
Generate my own scorer that optimizes the detection significance. Numbers are taken from the ALPS design requirement document.
###Code
def significance_scorer(y, y_pred,
n_s=2.8e-5, # assumed signal rate in Hz
e_d=0.5, # detector efficiency
t_obs=20. * 24. * 3600., # observation time in seconds
N_tot=1000 # total number of triggers for t_obs
):
"""
Scorer that scales upwards for better detection significance,
see https://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html
Parameters
----------
y: array-like
true labels
y_pred: array-like
predicted labels
t_obs: float
observation time in seconds
n_s: float
signal rate from regenerated photons
    e_d: float
Detector efficiency
N_tot: int
Total number of triggers recorded in t_obs
Returns
-------
Detection significance
"""
# true positive rate
# this is also the analysis efficiency
tp_rate = np.sum(y * y_pred) / float(np.sum(y))
# misidentified background events
# false positive rate
fp_rate = np.sum((y_pred == 1) & (y == 0)) / float(len(y))
# from this, you get the dark current
n_b = fp_rate * N_tot / t_obs
S = 2. * (np.sqrt(e_d * tp_rate * n_s + n_b) - np.sqrt(n_b)) * np.sqrt(t_obs)
return S
###Output
_____no_output_____
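###Markdown
As a quick sanity check of the scorer (a minimal sketch with made-up labels, not part of the original analysis), we can compare a perfect toy classifier with one that additionally flags a single background trigger: the extra false positive acts as a dark count and lowers the significance.
###Code
# hypothetical toy labels: 10 light events among 100 triggers
y_toy = np.zeros(100, dtype=int)
y_toy[:10] = 1
y_pred_perfect = y_toy.copy()   # all light found, no background flagged
y_pred_leaky = y_toy.copy()
y_pred_leaky[10] = 1            # same efficiency, but one background trigger flagged
print(significance_scorer(y_toy, y_pred_perfect))  # higher S
print(significance_scorer(y_toy, y_pred_leaky))    # lower S, since dark counts enter n_b
###Output
_____no_output_____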
###Markdown
Create a scorer
###Code
sig_score = make_scorer(significance_scorer,
greater_is_better=True,
t_obs=t_tot_hrs * 3600.,
N_tot=y.size)
###Output
_____no_output_____
###Markdown
The scoring options:
###Code
scoring = {'AUC': 'roc_auc',
'Accuracy': 'accuracy',
#'Precision': 'precision',
'Significance': sig_score
# 'Recall': 'recall',
# 'F_1': 'f1',
# 'F_2': make_scorer(fbeta_score, beta=2),
# 'F_{1/2}': make_scorer(fbeta_score, beta=0.5),
}
refit = 'Significance'
###Output
_____no_output_____
###Markdown
Which classifier to test:
###Code
classifier = 'dt'
classifier = 'bdt'
print(param_grid[classifier])
gs = GridSearchCV(clf[classifier](random_state=42, **default_pars[classifier]),
param_grid=param_grid[classifier],
scoring=scoring,
refit=refit,
#scoring=sig_score, #only use significance
#refit=True,
return_train_score=True,
cv=kf,
verbose=1,
n_jobs=8
)
t0 = time.time()
#gs.fit(X_train, y_train)
gs.fit(X_train_scaled, y_train)
t1 = time.time()
print("The parameter search took {0:.2f} s".format(t1-t0))
results = dict(gs_cv=gs.cv_results_)
###Output
_____no_output_____
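###Markdown
Since `refit='Significance'`, the fitted search object also exposes the selected parameters and the corresponding mean cross-validated score directly; this is a quick cross-check of the post-processing below.
###Code
print(gs.best_params_)  # parameters chosen for the refit ('Significance') metric
print(gs.best_score_)   # mean cross-validated significance for those parameters
###Output
_____no_output_____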
###Markdown
Post processing
###Code
def profile_params(results, scoring, clfid):
"""
Profile the parameters.
For each value of a parameter, compute
the best mean test and train scores and standard deviations
from profiling, i.e., for each value of a parameter
set the other grid parameters to the values that maximize the score.
"""
mean_best_test = {}
mean_best_train = {}
std_best_test = {}
std_best_train = {}
for score in scoring.keys():
mean_best_test[score] = {}
mean_best_train[score] = {}
std_best_test[score] = {}
std_best_train[score] = {}
for param, v in param_grid[clfid].items():
            mean_best_test[score][param] = np.zeros_like(v).astype(float)
            mean_best_train[score][param] = np.zeros_like(v).astype(float)
            std_best_test[score][param] = np.zeros_like(v).astype(float)
            std_best_train[score][param] = np.zeros_like(v).astype(float)
for i, vi in enumerate(v):
# create a mask where flattened param array corresponds to k = vi
if param == 'hidden_layer_sizes':
m = []
for x in results['param_hidden_layer_sizes'].data:
m.append(x == vi)
m = np.array(m)
else:
m = results[f'param_{param}'] == vi
# get the best value for this vi
idmax_test = np.argmax(results[f'mean_test_{score}'][m])
idmax_train = np.argmax(results[f'mean_train_{score}'][m])
mean_best_test[score][param][i] = results[f'mean_test_{score}'][m][idmax_test]
std_best_test[score][param][i] = results[f'std_test_{score}'][m][idmax_test]
mean_best_train[score][param][i] = results[f'mean_train_{score}'][m][idmax_train]
std_best_train[score][param][i] = results[f'std_train_{score}'][m][idmax_train]
return mean_best_test, std_best_test, mean_best_train, std_best_train
mt, st, mtr, sstr = profile_params(gs.cv_results_, scoring, classifier)
results['profile'] = dict(mean_test=mt, std_test=st, mean_train=mtr, std_train=sstr)
# loop over scoring:
# get the best param index
# and for the estimator with these params,
# calculate the score.
results['best_params'] = dict()
results['score_validation'] = dict()
results['learning_curve'] = dict()
results['confusion_matrix_test'] = dict()
results['confusion_matrix_train'] = dict()
results['classification_report'] = dict()
results['bkg_pred_test'] = dict()
results['tp_efficiency_test'] = dict()
results['bkg_pred_train'] = dict()
results['tp_efficiency_train'] = dict()
results['score_train'] = dict()
# for learning curve
train_sizes = (np.arange(0.1, 0.9, 0.1) * y_train.shape[0]).astype(int)
#X_use_test = X_test
#X_use_train = X_train
X_use_test = X_test_scaled
X_use_train = X_train_scaled
for k, v in scoring.items():
scorer = get_scorer(v)
# get the best index for parameters
best_index = np.nonzero(gs.cv_results_[f'rank_test_{k:s}'] == 1)[0][0]
results['best_params'][k] = copy.deepcopy(default_pars[classifier])
results['best_params'][k].update(gs.cv_results_['params'][best_index])
# init an estimator with the best parameters
#if k == 'Significance':
# results['best_params'][k]['max_depth'] = 19
# results['best_params'][k]['min_samples_split'] = 2
best_clf = clf[classifier](random_state=42, **results['best_params'][k])
best_clf.fit(X_use_train, y_train)
y_pred_test = best_clf.predict(X_use_test)
y_pred_train = best_clf.predict(X_use_train)
results['score_validation'][k] = scorer(best_clf, X_use_test, y_test)
results['score_train'][k] = scorer(best_clf, X_use_train, y_train)
# create a learning curve for the best classifier
train_sizes, train_scores, valid_scores = learning_curve(best_clf, X_use_train, y_train,
train_sizes=train_sizes,
cv=kf,
verbose=1,
n_jobs=8)
results['learning_curve'][k] = (train_sizes, train_scores, valid_scores)
# get the confusion matrix for the best classifier
results['confusion_matrix_test'][k] = confusion_matrix(y_test, y_pred_test)
results['confusion_matrix_train'][k] = confusion_matrix(y_train, y_pred_train)
# get the classification report for the best classifier
results['classification_report'][k] = classification_report(y_test, y_pred_test,
output_dict=True,
labels=[0, 1],
target_names=['bkg', 'light']
)
fp_test = results['confusion_matrix_test'][k][0,1] # false positive
fp_test_rate = fp_test / y_pred_test.size # false positive rate
results['bkg_pred_test'][k] = fp_test_rate * y.size
fp_train = results['confusion_matrix_train'][k][0,1] # false positive
    fp_train_rate = fp_train / y_pred_train.size # false positive rate
results['bkg_pred_train'][k] = fp_train_rate * y.size
# efficiency of identifying light
tp_test = results['confusion_matrix_test'][k][1,1] # true positive
tp_train = results['confusion_matrix_train'][k][1,1] # true positive
results['tp_efficiency_test'][k] = tp_test / (y_test == 1).sum()
results['tp_efficiency_train'][k] = tp_train / (y_train == 1).sum()
for k, v in scoring.items():
print(k, results['best_params'][k])
for k, v in scoring.items():
print(k, results['score_validation'][k], results['score_train'][k])
###Output
AUC 0.9998407285106792 1.0
Accuracy 0.9973939824686093 1.0
Significance 2.9181554313336378 10.218114061346759
###Markdown
Plot the resultsFirst plot the **parameter profiles**
###Code
plt.figure(figsize=(4*3, 4))
ax = []
for i, score in enumerate(['Significance']):
color = 'g' if score == 'AUC' else 'k'
color = 'r' if score == 'Accuracy' else color
for j, par in enumerate(results['profile']['mean_test'][score].keys()):
if not i:
ax.append(plt.subplot(1,len(results['profile']['mean_test'][score].keys()), j+1))
print (par)
if par == 'hidden_layer_sizes':
x = np.unique([np.sum(results['gs_cv'][f'param_{par}'].data[i]) for i in \
range(results['gs_cv'][f'param_{par}'].data.size)])
else:
            x = np.unique(results['gs_cv'][f'param_{par}'].data).astype(float)
for t in ['test', 'train']:
ax[j].plot(x, results['profile'][f'mean_{t:s}'][score][par],
color=color,
ls='-' if t == 'test' else '--',
label=score + " " + t
)
if t == 'test':
ax[j].fill_between(x, results['profile'][f'mean_{t:s}'][score][par] - \
0.5 * results['profile'][f'std_{t:s}'][score][par],
y2=results['profile'][f'mean_{t:s}'][score][par] + \
0.5 * results['profile'][f'std_{t:s}'][score][par],
color=color,
alpha=0.3)
if not i:
ax[j].set_xlabel(par)
else:
ax[j].legend()
#ax[j].set_ylim(0.92,1.001)
#ax[j].grid()
if j:
ax[j].tick_params(labelleft=False)
ax[j].set_ylim(0.1,11.)
plt.subplots_adjust(wspace = 0.05)
plt.figure(figsize=(4*3, 4))
ax = []
#for i, score in enumerate(['AUC', 'Accuracy', 'Precision']):
for i, score in enumerate(['AUC', 'Accuracy']):
color = 'g' if score == 'AUC' else 'k'
color = 'r' if score == 'Accuracy' else color
for j, par in enumerate(results['profile']['mean_test'][score].keys()):
if not i:
ax.append(plt.subplot(1,len(results['profile']['mean_test'][score].keys()), j+1))
print (par)
if par == 'hidden_layer_sizes':
x = np.unique([np.sum(results['gs_cv'][f'param_{par}'].data[i]) for i in \
range(results['gs_cv'][f'param_{par}'].data.size)])
else:
            x = np.unique(results['gs_cv'][f'param_{par}'].data).astype(float)
for t in ['test', 'train']:
ax[j].plot(x, results['profile'][f'mean_{t:s}'][score][par],
color=color,
ls='-' if t == 'test' else '--',
label=score + " " + t
)
if t == 'test':
ax[j].fill_between(x, results['profile'][f'mean_{t:s}'][score][par] - \
0.5 * results['profile'][f'std_{t:s}'][score][par],
y2=results['profile'][f'mean_{t:s}'][score][par] + \
0.5 * results['profile'][f'std_{t:s}'][score][par],
color=color,
alpha=0.3)
if not i:
ax[j].set_xlabel(par)
else:
ax[j].legend()
#ax[j].set_ylim(0.92,1.001)
ax[j].set_ylim(1.,10.)
#ax[j].grid()
if j:
ax[j].tick_params(labelleft=False)
ax[j].set_ylim(0.96,1.01)
plt.subplots_adjust(wspace = 0.05)
###Output
n_estimators
learning_rate
max_depth
n_estimators
learning_rate
max_depth
###Markdown
Plot the **learning curve**
###Code
for score, val in results['learning_curve'].items():
if not score == 'Significance':
continue
train_sizes, train_scores, valid_scores = val
plt.plot(train_sizes, train_scores.mean(axis=1),
marker='o',
label=score + " Train",
ls='--',
color='g' if score == 'AUC' else 'k'
)
plt.fill_between(train_sizes,
train_scores.mean(axis=1) - np.sqrt(train_scores.var()),
y2=train_scores.mean(axis=1) + np.sqrt(train_scores.var()),
alpha=0.3,
color='g' if score == 'AUC' else 'k',
zorder=-1
)
plt.plot(train_sizes, valid_scores.mean(axis=1),
marker='o',
label=score + " valid", ls='-',
color= 'g' if score == 'AUC' else 'k',
)
plt.fill_between(train_sizes,
valid_scores.mean(axis=1) - np.sqrt(valid_scores.var()),
y2=valid_scores.mean(axis=1) + np.sqrt(valid_scores.var()),
alpha=0.3,
color='g' if score == 'AUC' else 'k',
zorder=-1
)
plt.legend(title=classifier)
plt.grid()
plt.xlabel("Sample Size")
plt.ylabel("Score")
for score in scoring.keys():
disp = ConfusionMatrixDisplay(np.array(results['confusion_matrix_test'][score]),
display_labels=['bkg', 'light'])
disp.plot(cmap=plt.cm.Blues,
#ax=ax,
values_format="d")
plt.title(f"{classifier}" + f" {score} ")
keys = ['bkg_pred', 'tp_efficiency', 'score']
for k in scoring.keys():
for ki in keys:
print("==== {0:s} : {1:s} ====".format(k, ki))
if not ki == 'score':
for t in ['test', 'train']:
if ki == 'bkg_pred':
c = t_tot_hrs * 3600.
else:
c = 1.
print("{0:s}: {1:.3e}".format(t,
results['{0:s}_{1:s}'.format(ki, t)][k]
/ c))
else:
for t in ['train', 'validation']:
print("{0:s}: {1:.3e}".format(t,
results['{0:s}_{1:s}'.format(ki, t)][k]))
print(results['tp_efficiency_test']['Significance'])
print(results['tp_efficiency_train']['Significance'])
print(results['score_validation']['Significance'])
print(results['score_train']['Significance'])
###Output
2.9181554313336378
10.218114061346759
###Markdown
Plot the sensitivity
###Code
def significance(n_b, obs_time, n_s = 2.8e-5, e_d=0.5, e_a=1.):
"""Signficance of a signal given some background rate and obs time"""
N_b = obs_time * n_b
N_s = obs_time * n_s
eps = e_d * e_a
S = 2. * (np.sqrt(eps * N_s + N_b) - np.sqrt(N_b))
return S
# sensitivity from Rikhav's result
print(significance(7.52e-6, obs_time=t_tot_hrs * 3600., n_s = 2.8e-5, e_d=0.5, e_a=0.82))
plt.figure(figsize=(4,3), dpi=150)
n_b = np.logspace(-7., -4., 100)
obs_time = 20. * 24. * 3600.
plt.semilogx(n_b, significance(n_b, obs_time=obs_time), lw=2)
plt.grid()
plt.yticks(np.arange(10))
plt.xlabel("Dark current rate (Hz)")
plt.ylabel("Signficance $S$")
###Output
_____no_output_____ |
MNIST/Session1/1_7_Increasing_Capacity.ipynb | ###Markdown
Import Libraries
###Code
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
###Output
_____no_output_____
###Markdown
Data TransformationsWe first start with defining our data transformations. We need to think about what our data is and how we can augment it to correctly represent images that the model might not otherwise see.
###Code
# Train Phase transformations
train_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values.
# Note the difference between (0.1307) and (0.1307,)
])
# Test Phase transformations
test_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
###Output
_____no_output_____
###Markdown
Dataset and Creating Train/Test Split
###Code
train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./data/MNIST/raw/train-images-idx3-ubyte.gz
###Markdown
Dataloader Arguments & Test/Train Dataloaders
###Code
SEED = 1
# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)
# For reproducibility
torch.manual_seed(SEED)
if cuda:
torch.cuda.manual_seed(SEED)
# dataloader arguments - something you'd typically fetch from the command line
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)
# train dataloader
train_loader = torch.utils.data.DataLoader(train, **dataloader_args)
# test dataloader
test_loader = torch.utils.data.DataLoader(test, **dataloader_args)
###Output
CUDA Available? True
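###Markdown
The comment above mentions fetching the dataloader arguments from the command line. Outside a notebook, a minimal `argparse` sketch could look like the following (the flag names and defaults are illustrative, not part of this project).
###Code
import argparse

parser = argparse.ArgumentParser(description="dataloader arguments (illustrative)")
parser.add_argument("--batch-size", type=int, default=128)
parser.add_argument("--num-workers", type=int, default=4)
parser.add_argument("--no-shuffle", action="store_true")
args = parser.parse_args([])  # pass [] inside a notebook; on the command line use sys.argv
dataloader_args = dict(shuffle=not args.no_shuffle,
                       batch_size=args.batch_size,
                       num_workers=args.num_workers,
                       pin_memory=cuda)
print(dataloader_args)
###Output
_____no_output_____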
###Markdown
Data StatisticsIt is important to know your data very well. Let's check some of the statistics around our data and see what it actually looks like
###Code
# We'd need to convert it into Numpy! Remember above we have converted it into tensors already
train_data = train.train_data
train_data = train.transform(train_data.numpy())
print('[Train]')
print(' - Numpy Shape:', train.train_data.cpu().numpy().shape)
print(' - Tensor Shape:', train.train_data.size())
print(' - min:', torch.min(train_data))
print(' - max:', torch.max(train_data))
print(' - mean:', torch.mean(train_data))
print(' - std:', torch.std(train_data))
print(' - var:', torch.var(train_data))
dataiter = iter(train_loader)
images, labels = next(dataiter)  # dataiter.next() was removed in recent PyTorch versions
print(images.shape)
print(labels.shape)
# Let's visualize some of the images
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(images[0].numpy().squeeze(), cmap='gray_r')
###Output
###Markdown
MOREIt is important that we view as many images as possible. This is required to get some intuition for image augmentation later on
###Code
figure = plt.figure()
num_of_images = 60
for index in range(1, num_of_images + 1):
plt.subplot(6, 10, index)
plt.axis('off')
plt.imshow(images[index].numpy().squeeze(), cmap='gray_r')
###Output
_____no_output_____
###Markdown
The modelLet's start with the model we first saw
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# Input Block
self.convblock1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=10, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(10),
nn.ReLU()
) # output_size = 26
# CONVOLUTION BLOCK 1
self.convblock2 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(10),
nn.ReLU()
) # output_size = 24
self.convblock3 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=20, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(20),
nn.ReLU()
) # output_size = 22
# TRANSITION BLOCK 1
self.pool1 = nn.MaxPool2d(2, 2) # output_size = 11
self.convblock4 = nn.Sequential(
nn.Conv2d(in_channels=20, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
nn.BatchNorm2d(10),
nn.ReLU()
) # output_size = 11
# CONVOLUTION BLOCK 2
self.convblock5 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(10),
nn.ReLU()
) # output_size = 9
self.convblock6 = nn.Sequential(
nn.Conv2d(in_channels=10, out_channels=20, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(20),
nn.ReLU()
) # output_size = 7
# OUTPUT BLOCK
self.convblock7 = nn.Sequential(
nn.Conv2d(in_channels=20, out_channels=32, kernel_size=(3, 3), padding=0, bias=False),
nn.BatchNorm2d(32),
nn.ReLU()
) # output_size = 5
self.convblock8 = nn.Sequential(
nn.Conv2d(in_channels=32, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
) # output_size = 5
self.gap = nn.Sequential(
nn.AvgPool2d(kernel_size=5)
) # output_size = 1
self.dropout = nn.Dropout(0.25)
def forward(self, x):
x = self.convblock1(x)
x = self.convblock2(x)
x = self.convblock3(x)
x = self.dropout(x)
x = self.pool1(x)
x = self.convblock4(x)
x = self.convblock5(x)
x = self.convblock6(x)
x = self.dropout(x)
x = self.convblock7(x)
x = self.convblock8(x)
x = self.gap(x)
x = x.view(-1, 10)
return F.log_softmax(x, dim=-1)
###Output
_____no_output_____
###Markdown
Model ParamsCan't emphasize enough how important viewing the model summary is. Unfortunately, there is no built-in model visualizer, so we have to take external help
###Code
!pip install torchsummary
from torchsummary import summary
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
model = Net().to(device)
summary(model, input_size=(1, 28, 28))
###Output
Requirement already satisfied: torchsummary in /usr/local/lib/python3.6/dist-packages (1.5.1)
cuda
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 10, 26, 26] 90
BatchNorm2d-2 [-1, 10, 26, 26] 20
ReLU-3 [-1, 10, 26, 26] 0
Conv2d-4 [-1, 10, 24, 24] 900
BatchNorm2d-5 [-1, 10, 24, 24] 20
ReLU-6 [-1, 10, 24, 24] 0
Conv2d-7 [-1, 20, 22, 22] 1,800
BatchNorm2d-8 [-1, 20, 22, 22] 40
ReLU-9 [-1, 20, 22, 22] 0
Dropout-10 [-1, 20, 22, 22] 0
MaxPool2d-11 [-1, 20, 11, 11] 0
Conv2d-12 [-1, 10, 11, 11] 200
BatchNorm2d-13 [-1, 10, 11, 11] 20
ReLU-14 [-1, 10, 11, 11] 0
Conv2d-15 [-1, 10, 9, 9] 900
BatchNorm2d-16 [-1, 10, 9, 9] 20
ReLU-17 [-1, 10, 9, 9] 0
Conv2d-18 [-1, 20, 7, 7] 1,800
BatchNorm2d-19 [-1, 20, 7, 7] 40
ReLU-20 [-1, 20, 7, 7] 0
Dropout-21 [-1, 20, 7, 7] 0
Conv2d-22 [-1, 32, 5, 5] 5,760
BatchNorm2d-23 [-1, 32, 5, 5] 64
ReLU-24 [-1, 32, 5, 5] 0
Conv2d-25 [-1, 10, 5, 5] 320
AvgPool2d-26 [-1, 10, 1, 1] 0
================================================================
Total params: 11,994
Trainable params: 11,994
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.70
Params size (MB): 0.05
Estimated Total Size (MB): 0.75
----------------------------------------------------------------
###Markdown
Training and TestingLooking at logs can be boring, so we'll introduce the **tqdm** progress bar to get cooler logs. Let's write the train and test functions
###Code
from tqdm import tqdm
train_losses = []
test_losses = []
train_acc = []
test_acc = []
def train(model, device, train_loader, optimizer, epoch):
model.train()
pbar = tqdm(train_loader)
correct = 0
processed = 0
for batch_idx, (data, target) in enumerate(pbar):
# get samples
data, target = data.to(device), target.to(device)
# Init
optimizer.zero_grad()
        # In PyTorch, we need to set the gradients to zero before starting to do backpropagation because PyTorch accumulates the gradients on subsequent backward passes.
# Because of this, when you start your training loop, ideally you should zero out the gradients so that you do the parameter update correctly.
# Predict
y_pred = model(data)
# Calculate loss
loss = F.nll_loss(y_pred, target)
        train_losses.append(loss.item())  # store a float instead of the graph-attached tensor
# Backpropagation
loss.backward()
optimizer.step()
# Update pbar-tqdm
pred = y_pred.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
processed += len(data)
pbar.set_description(desc= f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}')
train_acc.append(100*correct/processed)
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_losses.append(test_loss)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
test_acc.append(100. * correct / len(test_loader.dataset))
###Output
_____no_output_____
###Markdown
Let's train and test our model
###Code
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
EPOCHS = 20
for epoch in range(EPOCHS):
print("EPOCH:", epoch)
train(model, device, train_loader, optimizer, epoch)
test(model, device, test_loader)
fig, axs = plt.subplots(2,2,figsize=(15,10))
axs[0, 0].plot(train_losses)
axs[0, 0].set_title("Training Loss")
axs[1, 0].plot(train_acc[4000:])
axs[1, 0].set_title("Training Accuracy")
axs[0, 1].plot(test_losses)
axs[0, 1].set_title("Test Loss")
axs[1, 1].plot(test_acc)
axs[1, 1].set_title("Test Accuracy")
###Output
_____no_output_____ |
day12/Section 1 - Differential Privacy.ipynb | ###Markdown
Lesson: Toy Differential Privacy - Simple Database Queries In this section we're going to play around with Differential Privacy in the context of a database query. The database is going to be a VERY simple database with only one boolean column. Each row corresponds to a person. Each value corresponds to whether or not that person has a certain private attribute (such as whether they have a certain disease, or whether they are above/below a certain age). We are then going to learn how to know whether a database query over such a small database is differentially private or not - and more importantly - what techniques are at our disposal to ensure various levels of privacy First We Create a Simple DatabaseStep one is to create our database - we're going to do this by initializing a random list of 1s and 0s (which are the entries in our database). Note - the number of entries directly corresponds to the number of people in our database.
###Code
import torch
# the number of entries in our database
num_entries = 5000
db = torch.rand(num_entries) > 0.5
db
###Output
_____no_output_____
###Markdown
Project: Generate Parallel DatabasesKey to the definition of differential privacy is the ability to ask the question "When querying a database, if I removed someone from the database, would the output of the query be any different?". Thus, in order to check this, we must construct what we term "parallel databases" which are simply databases with one entry removed. In this first project, I want you to create a list of every parallel database to the one currently contained in the "db" variable. Then, I want you to create a function which both:- creates the initial database (db)- creates all parallel databases
###Code
# try project here!
db = torch.rand(num_entries) > 0.5
db
db.shape
db[:5]
remove_index = 2
torch.cat((db[:remove_index], db[remove_index+1:]))[:5]
def get_parallel_db(db, remove_index):
return torch.cat((db[:remove_index], db[remove_index+1:]))
get_parallel_db(db, 2)[:5]
def get_parallel_dbs(db):
parallel_dbs = list()
for i in range(len(db)):
pdb = get_parallel_db(db, i)
parallel_dbs.append(pdb)
return parallel_dbs
pdbs = get_parallel_dbs(db)
print(len(pdbs))
print(len(pdbs[0]))
def create_db_and_parallels(num_entries):
db = torch.rand(num_entries) > 0.5
pdbs = get_parallel_dbs(db)
return db, pdbs
db, pdbs = create_db_and_parallels(10)
db, pdbs
###Output
_____no_output_____
###Markdown
Lesson: Towards Evaluating The Differential Privacy of a FunctionIntuitively, we want to be able to query our database and evaluate whether or not the result of the query is leaking "private" information. As mentioned previously, this is about evaluating whether the output of a query changes when we remove someone from the database. Specifically, we want to evaluate the *maximum* amount the query changes when someone is removed (maximum over all possible people who could be removed). So, in order to evaluate how much privacy is leaked, we're going to iterate over each person in the database and measure the difference in the output of the query relative to when we query the entire database. Just for the sake of argument, let's make our first "database query" a simple sum. Aka, we're going to count the number of 1s in the database.
###Code
db, pdbs = create_db_and_parallels(5000)
def query(db):
return db.sum()
full_db_result = query(db)
query(db)
i = 10
print('db[{}] = {}'.format(i, db[i]))
print('query(pdbs[{}]) = {}'.format(i, query(pdbs[i])))
sensitivity = 0
for pdb in pdbs:
pdb_result = query(pdb)
db_distance = torch.abs(pdb_result - full_db_result)
if(db_distance > sensitivity):
sensitivity = db_distance
sensitivity
###Output
_____no_output_____
###Markdown
Project - Evaluating the Privacy of a FunctionIn the last section, we measured the difference between each parallel db's query result and the query result for the entire database and then calculated the max value (which was 1). This value is called "sensitivity", and it corresponds to the function we chose for the query. Namely, the "sum" query will always have a sensitivity of exactly 1. However, we can also calculate sensitivity for other functions as well.Let's try to calculate sensitivity for the "mean" function.
###Code
# try this project here!
def sensitivity(query, n_entries):
db, pdbs = create_db_and_parallels(n_entries)
full_db_result = query(db)
max_distance = 0
for pdb in pdbs:
pdb_res = query(pdb)
db_distance = torch.abs(pdb_res - full_db_result)
if db_distance > max_distance: max_distance = db_distance
return max_distance
sens = sensitivity(query, 5000)
print('sensitivity = {}'.format(sens))
def query_mean(db):
return db.float().mean()
sens = sensitivity(query_mean, 10)
print(sens)
###Output
tensor(0.0667)
###Markdown
Wow! That sensitivity is WAY lower. Note the intuition here. "Sensitivity" is measuring how sensitive the output of the query is to a person being removed from the database. For a simple sum, this is always 1, but for the mean, removing a person is going to change the result of the query by roughly 1 divided by the size of the database (which is much smaller). Thus, "mean" is a VASTLY less "sensitive" function (query) than SUM. Project: Calculate L1 Sensitivity For ThresholdIn this first project, I want you to calculate the sensitivity for the "threshold" function. - First compute the sum over the database (i.e. sum(db)) and return whether that sum is greater than a certain threshold.- Then, I want you to create databases of size 10 and threshold of 5 and calculate the sensitivity of the function. - Finally, re-initialize the database 10 times and calculate the sensitivity each time.
###Code
# try this project here!
def query(db, threshold = 5):
return (db.sum() > threshold).float()
db, pdbs = create_db_and_parallels(10)
query(db, 5)
query(pdbs[0], 5)
for i in range(10):
sens_f = sensitivity(query, n_entries=10)
print(sens_f)
###Output
0
tensor(1.)
tensor(1.)
0
0
0
0
0
0
tensor(1.)
###Markdown
Lesson: A Basic Differencing AttackSadly none of the functions we've looked at so far are differentially private (despite them having varying levels of sensitivity). The most basic type of attack can be done as follows.Let's say we wanted to figure out a specific person's value in the database. All we would have to do is query for the sum of the entire database and then the sum of the entire database without that person! Project: Perform a Differencing Attack on Row 10In this project, I want you to construct a database and then demonstrate how you can use two different sum queries to expose the value of the person represented by row 10 in the database (note, you'll need to use a database with at least 10 rows)
###Code
# try this project here!
db, _ = create_db_and_parallels(100)
db
pdb = get_parallel_db(db, 6)
db[6]
# differencing attack using sum query
sum(db) - sum(pdb)
# differencing attack using mean query
sum(db).float()/len(db) - sum(pdb).float()/len(pdb)
sum(db)
# differencing attack using threshold
(sum(db) > 44) - (sum(pdb) > 44)
###Output
_____no_output_____
###Markdown
Project: Local Differential PrivacyAs you can see, the basic sum query is not differentially private at all! In truth, differential privacy always requires a form of randomness added to the query. Let me show you what I mean. Randomized Response (Local Differential Privacy)Let's say I have a group of people I wish to survey about a very taboo behavior which I think they will lie about (say, I want to know if they have ever committed a certain kind of crime). I'm not a policeman, I'm just trying to collect statistics to understand the higher level trend in society. So, how do we do this? One technique is to add randomness to each person's response by giving each person the following instructions (assuming I'm asking a simple yes/no question):- Flip a coin 2 times.- If the first coin flip is heads, answer honestly- If the first coin flip is tails, answer according to the second coin flip (heads for yes, tails for no)!Thus, each person is now protected with "plausible deniability". If they answer "Yes" to the question "have you committed X crime?", then it might be because they actually did, or it might be because they are answering according to a random coin flip. Each person has a high degree of protection. Furthermore, we can recover the underlying statistics with some accuracy, as the "true statistics" are simply averaged with a 50% probability. Thus, if we collect a bunch of samples and it turns out that 60% of people answer yes, then we know that the TRUE distribution is actually centered around 70%, because 70% averaged with 50% (a coin flip) is 60% which is the result we obtained. However, it should be noted that, especially when we only have a few samples, this comes at the cost of accuracy. This tradeoff exists across all of Differential Privacy. The greater the privacy protection (plausible deniability) the less accurate the results. Let's implement this local DP for our database from before!
###Code
# try this project here!
db, pdbs = create_db_and_parallels(100)
true_res = torch.mean(db.float())
true_res
db
first_coin_flip = (torch.rand(len(db)) > 0.5).float()
first_coin_flip
second_coin_flip = (torch.rand(len(db)) > 0.5).float()
second_coin_flip
db.float() * first_coin_flip # answer honestly
(1 - first_coin_flip) * second_coin_flip # answer randomly
augmented_db = db.float() * first_coin_flip + (1 - first_coin_flip) * second_coin_flip
torch.mean(augmented_db.float())
torch.mean(augmented_db.float()) * 2 - 0.5 # estimated result
def query(db):
true_res = torch.mean(db.float())
first_coin_flip = (torch.rand(len(db)) > 0.5).float()
second_coin_flip = (torch.rand(len(db)) > 0.5).float()
augmented_db = db.float() * first_coin_flip + (1 - first_coin_flip) * second_coin_flip
db_res = torch.mean(augmented_db.float()) * 2 - 0.5
return db_res, true_res
query(db)
db, pdbs = create_db_and_parallels(10)
private_res, true_res = query(db)
print('private_res = {}'.format(private_res))
print('true_res = {}'.format(true_res))
db, pdbs = create_db_and_parallels(10000)
private_res, true_res = query(db)
print('private_res = {}'.format(private_res))
print('true_res = {}'.format(true_res))
###Output
private_res = 0.49220001697540283
true_res = 0.5012999773025513
###Markdown
Project: Varying Amounts of NoiseIn this project, I want you to augment the randomized response query (the one we just wrote) to allow for varying amounts of randomness to be added. Specifically, I want you to bias the coin flip to be higher or lower and then run the same experiment. Note - this one is a bit trickier than you might expect. You need to both adjust the likelihood of the first coin flip AND the de-skewing at the end (where we create the "augmented_result" variable).
###Code
# try this project here!
def query(db, noise=0.2):
true_res = torch.mean(db.float())
first_coin_flip = (torch.rand(len(db)) > noise).float()
second_coin_flip = (torch.rand(len(db)) > 0.5).float()
augmented_db = db.float() * first_coin_flip + (1 - first_coin_flip) * second_coin_flip
db_res = (torch.mean(augmented_db.float()) - 0.5*noise)/(1 - noise)
return db_res, true_res
db, pdbs = create_db_and_parallels(10)
private_res, true_res = query(db)
print('private_res = {}'.format(private_res))
print('true_res = {}'.format(true_res))
db, pdbs = create_db_and_parallels(10000)
private_res, true_res = query(db)
print('private_res = {}'.format(private_res))
print('true_res = {}'.format(true_res))
db, pdbs = create_db_and_parallels(10)
private_res, true_res = query(db, 0.8)
print('private_res = {}'.format(private_res))
print('true_res = {}'.format(true_res))
db, pdbs = create_db_and_parallels(10000)
private_res, true_res = query(db, 0.8)
print('private_res = {}'.format(private_res))
print('true_res = {}'.format(true_res))
###Output
private_res = 0.5115000605583191
true_res = 0.5141000151634216
###Markdown
Lesson: The Formal Definition of Differential PrivacyThe previous method of adding noise was called "Local Differential Privacy" because we added noise to each datapoint individually. This is necessary for some situations wherein the data is SO sensitive that individuals do not trust noise to be added later. However, it comes at a very high cost in terms of accuracy. However, alternatively we can add noise AFTER data has been aggregated by a function. This kind of noise can allow for similar levels of protection with a lower effect on accuracy. However, participants must be able to trust that no-one looked at their datapoints _before_ the aggregation took place. In some situations this works out well, in others (such as an individual hand-surveying a group of people), this is less realistic.Nevertheless, global differential privacy is incredibly important because it allows us to perform differential privacy on smaller groups of individuals with lower amounts of noise. Let's revisit our sum functions.
###Code
db, pdbs = create_db_and_parallels(100)
def query(db):
return torch.sum(db.float())
def M(db):
    return query(db) + noise  # conceptual sketch: the 'noise' term is what we define next
query(db)
###Output
_____no_output_____
###Markdown
So the idea here is that we want to add noise to the output of our function. We actually have two different kinds of noise we can add - Laplacian Noise or Gaussian Noise. However, before we do so at this point we need to dive into the formal definition of Differential Privacy. _Image From: "The Algorithmic Foundations of Differential Privacy" - Cynthia Dwork and Aaron Roth - https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf_ This definition does not _create_ differential privacy, instead it is a measure of how much privacy is afforded by a query M. Specifically, it's a comparison between running the query M on a database (x) and a parallel database (y). As you remember, parallel databases are defined to be the same as a full database (x) with one entry/person removed.Thus, this definition says that FOR ALL parallel databases, the maximum distance between a query on database (x) and the same query on database (y) will be e^epsilon, but that occasionally this constraint won't hold with probability delta. Thus, this theorem is called "epsilon delta" differential privacy. EpsilonLet's unpack the intuition of this for a moment. Epsilon Zero: If a query satisfied this inequality where epsilon was set to 0, then that would mean that the query for all parallel databases output the exact same value as the full database. As you may remember, when we calculated the "threshold" function, often the Sensitivity was 0. In that case, the epsilon also happened to be zero.Epsilon One: If a query satisfied this inequality with epsilon 1, then the maximum distance between all queries would be 1 - or more precisely - the maximum distance between the two random distributions M(x) and M(y) is 1 (because all these queries have some amount of randomness in them, just like we observed in the last section). DeltaDelta is basically the probability that epsilon breaks. Namely, sometimes the epsilon is different for some queries than it is for others. For example, you may remember when we were calculating the sensitivity of threshold, most of the time sensitivity was 0 but sometimes it was 1. Thus, we could calculate this as "epsilon zero but non-zero delta" which would say that epsilon is perfect except for some probability of the time when it's arbitrarily higher. Note that this expression doesn't represent the full tradeoff between epsilon and delta. Lesson: How To Add Noise for Global Differential PrivacyIn this lesson, we're going to learn about how to take a query and add varying amounts of noise so that it satisfies a certain degree of differential privacy. In particular, we're going to leave behind the Local Differential privacy previously discussed and instead opt to focus on Global differential privacy. So, to sum up, this lesson is about adding noise to the output of our query so that it satisfies a certain epsilon-delta differential privacy threshold.There are two kinds of noise we can add - Gaussian Noise or Laplacian Noise. Generally speaking Laplacian is better, but both are still valid. Now to the hard question... How much noise should we add?The amount of noise necessary to add to the output of a query is a function of four things:- the type of noise (Gaussian/Laplacian)- the sensitivity of the query/function- the desired epsilon (ε)- the desired delta (δ)Thus, for each type of noise we're adding, we have a different way of calculating how much to add as a function of sensitivity, epsilon, and delta. We're going to focus on Laplacian noise. 
Laplacian noise is increased/decreased according to a "scale" parameter b. We choose "b" based on the following formula: b = sensitivity(query) / epsilon. In other words, if we set b to be this value, then we know that we will have a privacy leakage of <= epsilon. Furthermore, the nice thing about Laplace is that it guarantees this with delta == 0. There are some tunings where we can have very low epsilon where delta is non-zero, but we'll ignore them for now. Querying Repeatedly- if we query the database multiple times - we can simply add the epsilons (even if we change the amount of noise and their epsilons are not the same). Project: Create a Differentially Private QueryIn this project, I want you to take what you learned in the previous lesson and create a query function which sums over the database and adds just the right amount of noise such that it satisfies an epsilon constraint. Write a query for both "sum" and for "mean". Ensure that you use the correct sensitivity measures for both.
###Code
# try this project here!
###Output
_____no_output_____
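###Markdown
One possible solution sketch (a minimal version, not the only valid one): for a database of 0/1 entries the sum query has sensitivity 1 and the mean query has sensitivity 1/n, so we draw Laplacian noise with scale b = sensitivity / epsilon and add it to the query output.
###Code
import numpy as np

epsilon = 0.5

def laplacian_mechanism(db, query, sensitivity, epsilon=epsilon):
    beta = sensitivity / epsilon
    noise = torch.tensor(np.random.laplace(0, beta, 1))
    return query(db) + noise

def sum_query(db):
    return db.float().sum()

def mean_query(db):
    return db.float().mean()

db, pdbs = create_db_and_parallels(100)
print(laplacian_mechanism(db, sum_query, sensitivity=1))           # sensitivity of the sum is 1
print(laplacian_mechanism(db, mean_query, sensitivity=1/len(db)))  # sensitivity of the mean is 1/n
###Output
_____no_output_____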
###Markdown
Lesson: Differential Privacy for Deep LearningSo in the last lessons you may have been wondering - what does all of this have to do with Deep Learning? Well, these same techniques we were just studying form the core primitives for how Differential Privacy provides guarantees in the context of Deep Learning. Previously, we defined perfect privacy as "a query to a database returns the same value even if we remove any person from the database", and used this intuition in the description of epsilon/delta. In the context of deep learning we have a similar standard.Training a model on a dataset should return the same model even if we remove any person from the dataset.Thus, we've replaced "querying a database" with "training a model on a dataset". In essence, the training process is a kind of query. However, one should note that this adds two points of complexity which database queries did not have: 1. do we always know where "people" are referenced in the dataset? 2. neural models rarely train to the same output model, even on identical dataThe answer to (1) is to treat each training example as a single, separate person. Strictly speaking, this is often overly zealous as some training examples have no relevance to people and others may have multiple/partial (consider an image with multiple people contained within it). Thus, localizing exactly where "people" are referenced, and thus how much your model would change if people were removed, is challenging.The answer to (2) is also an open problem - but several interesting proposals have been made. We're going to focus on one of the most popular proposals, PATE. An Example Scenario: A Health Neural NetworkFirst we're going to consider a scenario - you work for a hospital and you have a large collection of images about your patients. However, you don't know what's in them. You would like to use these images to develop a neural network which can automatically classify them, however since your images aren't labeled, they aren't sufficient to train a classifier. However, being a cunning strategist, you realize that you can reach out to 10 partner hospitals which DO have annotated data. It is your hope to train your new classifier on their datasets so that you can automatically label your own. While these hospitals are interested in helping, they have privacy concerns regarding information about their patients. Thus, you will use the following technique to train a classifier which protects the privacy of patients in the other hospitals.- 1) You'll ask each of the 10 hospitals to train a model on their own datasets (All of which have the same kinds of labels)- 2) You'll then use each of the 10 partner models to predict on your local dataset, generating 10 labels for each of your datapoints- 3) Then, for each local data point (now with 10 labels), you will perform a DP query to generate the final true label. This query is a "max" function, where "max" is the most frequent label across the 10 labels. We will need to add laplacian noise to make this Differentially Private to a certain epsilon/delta constraint.- 4) Finally, we will retrain a new model on our local dataset which now has labels. This will be our final "DP" model.So, let's walk through these steps. I will assume you're already familiar with how to train/predict a deep neural network, so we'll skip steps 1 and 2 and work with example data. 
We'll focus instead on step 3, namely how to perform the DP query for each example using toy data.So, let's say we have 10,000 training examples, and we've got 10 labels for each example (from our 10 "teacher models" which were trained directly on private data). Each label is chosen from a set of 10 possible labels (categories) for each image.
###Code
import numpy as np
num_teachers = 10 # we're working with 10 partner hospitals
num_examples = 10000 # the size of OUR dataset
num_labels = 10 # number of lablels for our classifier
preds = (np.random.rand(num_teachers, num_examples) * num_labels).astype(int).transpose(1,0) # fake predictions
new_labels = list()
for an_image in preds:
label_counts = np.bincount(an_image, minlength=num_labels)
epsilon = 0.1
beta = 1 / epsilon
for i in range(len(label_counts)):
label_counts[i] += np.random.laplace(0, beta, 1)
new_label = np.argmax(label_counts)
new_labels.append(new_label)
# new_labels
###Output
_____no_output_____
###Markdown
PATE Analysis
###Code
labels = np.array([9, 9, 3, 6, 9, 9, 9, 9, 8, 2])
counts = np.bincount(labels, minlength=10)
query_result = np.argmax(counts)
query_result
from syft.frameworks.torch.differential_privacy import pate
num_teachers, num_examples, num_labels = (100, 100, 10)
preds = (np.random.rand(num_teachers, num_examples) * num_labels).astype(int) #fake preds
indices = (np.random.rand(num_examples) * num_labels).astype(int) # true answers
preds[:,0:10] *= 0
data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5)
assert data_dep_eps < data_ind_eps
data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5)
print("Data Independent Epsilon:", data_ind_eps)
print("Data Dependent Epsilon:", data_dep_eps)
preds[:,0:50] *= 0
data_dep_eps, data_ind_eps = pate.perform_analysis(teacher_preds=preds, indices=indices, noise_eps=0.1, delta=1e-5, moments=20)
print("Data Independent Epsilon:", data_ind_eps)
print("Data Dependent Epsilon:", data_dep_eps)
###Output
Data Independent Epsilon: 11.756462732485115
Data Dependent Epsilon: 0.9029013677789843
###Markdown
Where to Go From HereRead: - Algorithmic Foundations of Differential Privacy: https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf - Deep Learning with Differential Privacy: https://arxiv.org/pdf/1607.00133.pdf - The Ethical Algorithm: https://www.amazon.com/Ethical-Algorithm-Science-Socially-Design/dp/0190948205 Topics: - The Exponential Mechanism - The Moment's Accountant - Differentially Private Stochastic Gradient DescentAdvice: - For deployments - stick with public frameworks! - Join the Differential Privacy Community - Don't get ahead of yourself - DP is still in the early days Section Project:For the final project for this section, you're going to train a DP model using this PATE method on the MNIST dataset, provided below.
###Code
import torchvision.datasets as datasets
mnist_trainset = datasets.MNIST(root='./data', train=True, download=True, transform=None)
mnist_testset = datasets.MNIST(root='./data', train=False, download=True, transform=None)
train_data = mnist_trainset.data
train_targets = mnist_trainset.targets
test_data = mnist_testset.data
test_targets = mnist_testset.targets
###Output
_____no_output_____
|
09b_pytorch.ipynb | ###Markdown
Music machine learning - Pytorch Author: Philippe Esling ([email protected])In this course we will cover1. A [quick introduction](intro) to Pytorch 2. An implementation of [advanced models](models) 3. A quick proposal for [attention](attention) layers Introduction to Pytorch`Pytorch` is a Python-based scientific computing package targeted at deep learning, which provides great flexibility and ease of use for GPU computation. `Pytorch` is constructed around the concept of `Tensor`, which is very similar to `numpy.ndarray`, but can be seamlessly run on GPU.Here are some examples of different `Tensor` creation
###Code
import torch
# Create an uninitialized 5 x 3 Tensor
x = torch.empty(5, 3)
# Create a 64 x 3 x 32 x 32 random Tensor
x = torch.rand(64, 3, 32, 32)
# Create a Tensor of zeros with _long_ type
x = torch.zeros(10, 10, dtype=torch.long)
# Construct a Tensor from the data
x = torch.tensor([5.5, 3])
###Output
_____no_output_____
###Markdown
or create a tensor based on an existing tensor. These methods will reuse properties of the input tensor, e.g. dtype, unless new values are provided by the user
###Code
x = x.new_ones(8, 2, dtype=torch.double) # new_* methods take in sizes
x = torch.randn_like(x, dtype=torch.float) # override dtype!
print(x.size())
print(x.shape[0])
###Output
_____no_output_____
###Markdown
Arithmetic operationsTensors provide access to a transparent library of arithmetic operations
###Code
x = torch.rand(8, 2)
y = torch.rand(8, 2)
z = torch.rand(2, 4)
# Equivalent additions
print(x + y)
print(torch.add(x, y))
# Add in place
x.add_(y)
print(x)
# Put in target Tensor
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
# Element_wise multiplication
print(x * y)
# Matrix product
print(x @ z)
###Output
_____no_output_____
###Markdown
Slicing and resizingYou can slice tensors using the usual Python operators. For resizing and reshaping tensors, you can use ``Tensor.view`` or ``torch.reshape``
###Code
print(x[:, 1])
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
###Output
_____no_output_____
###Markdown
If you have a one element tensor, use ``.item()`` to get the value as a Python number
###Code
x = torch.randn(1)
print(x)
print(x.item())
###Output
_____no_output_____
###Markdown
Tensors have more than 100 operations, including transposing, indexing, slicing, mathematical operations, linear algebra, random numbers, which are all described at [https://pytorch.org/docs/torch](https://pytorch.org/docs/torch) Numpy bridgeConverting a Torch Tensor to a Numpy array and vice versa is extremely simple. Note that the Pytorch Tensor and Numpy array **will share their underlying memory locations** (if the Tensor is on CPU), and changing one will change the other.
###Code
a = torch.ones(5)
b = a.numpy()
a.add_(1)
print(a)
print(b)
###Output
_____no_output_____
###Markdown
Going GPUTensors can be moved onto any device using the ``.to`` method.
###Code
# let us run this cell only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
if torch.cuda.is_available():
device = torch.device("cuda") # a CUDA device object
y = torch.ones_like(x, device=device) # directly create a tensor on GPU
x = x.to(device) # or just use strings ``.to("cuda")``
z = x + y
print(z)
print(z.to("cpu", torch.double)) # ``.to`` can also change dtype together!
###Output
_____no_output_____
###Markdown
Computation GraphsThe concept of a computation graph is essential to efficient deep learning programming, because it allows you to not have to write the back propagation gradients yourself. A computation graph is simply a specification of how your data is combined to give you the output (the forward pass). Since the graph totally specifies what parameters were involved with which operations, it contains enough information to compute derivatives. The fundamental flag ``requires_grad`` allows to specify which variables are going to need differentiation in all these operations. If ``requires_grad=True``, the Tensor object keeps track of how it was created.
###Code
# Tensor factory methods have a ``requires_grad`` flag
x = torch.tensor([1., 2., 3], requires_grad=True)
# With requires_grad=True, we can still do all the operations
y = torch.tensor([4., 5., 6], requires_grad=True)
z = x + y
print(z)
# But z now knows something extra.
print(z.grad_fn)
###Output
_____no_output_____
###Markdown
Therefore, `z` knows that it is the direct result of an addition. Furthermore, if we keep following z.grad_fn, we can even trace back to both `x` and `y`. But how does that help us compute a gradient?
###Code
# Lets sum up all the entries in z
s = z.sum()
print(s)
print(s.grad_fn)
###Output
_____no_output_____
###Markdown
So now, what is the derivative of this sum with respect to the first component of x? In math, we want\begin{align}\frac{\partial s}{\partial x_0}\end{align}Well, s knows that it was created as a sum of the tensor z. z knows that it was the sum x + y. So\begin{align}s = \overbrace{x_0 + y_0}^\text{$z_0$} + \overbrace{x_1 + y_1}^\text{$z_1$} + \overbrace{x_2 + y_2}^\text{$z_2$}\end{align}And so s contains enough information to determine that the derivative we want is 1. We can have Pytorch compute the gradient, and see that we were right:**Note** : If you run this block multiple times, the gradient will increment. That is because Pytorch *accumulates* the gradient into the .grad property, since for many models this is very convenient.
###Code
# calling .backward() on any variable will run backprop, starting from it.
s.backward()
print(x.grad)
###Output
_____no_output_____
###Markdown
Understanding what is going on in the block below is crucial for being a successful programmer in deep learning.
###Code
x = torch.randn(2, 2)
y = torch.randn(2, 2)
# By default, user created Tensors have ``requires_grad=False``
print(x.requires_grad, y.requires_grad)
z = x + y
# So you can't backprop through z
print(z.grad_fn)
# ``.requires_grad_( ... )`` changes an existing Tensor's ``requires_grad``
x = x.requires_grad_()
y = y.requires_grad_()
# z contains enough information to compute gradients, as we saw above
z = x + y
print(z.grad_fn)
# If any input to an operation has ``requires_grad=True``, so will the output
print(z.requires_grad)
# Now z has the computation history, which we can **detach**
new_z = z.detach()
# Which means that we have no gradient attached anymore
print(new_z.grad_fn)
###Output
_____no_output_____
###Markdown
You can also stop autograd from tracking history on Tensors with ``.requires_grad=True`` by wrapping the code block in ``with torch.no_grad():``
###Code
print(x.requires_grad)
print((x ** 2).requires_grad)
with torch.no_grad():
print((x ** 2).requires_grad)
###Output
_____no_output_____
###Markdown
Defining networks Here, we briefly recall that in `PyTorch`, the `nn` package provides higher-level abstractions over raw computational graphs that are useful for building neural networks. The `nn` package defines a set of `Modules`, which are roughly equivalent to neural network layers. A `Module` receives input `Tensors` and computes output `Tensors`, but may also hold internal state such as `Tensors` containing learnable parameters. In the following example, we use the `nn` package to show how easy it is to instantiate a three-layer network
###Code
import torch
import torch.nn as nn
# Define the input dimensions
in_size = 1000
# Number of neurons in a layer
hidden_size = 100
# Output (target) dimension
output_size = 10
# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
nn.Linear(in_size, hidden_size),
nn.ReLU(),
nn.Linear(hidden_size, hidden_size),
nn.Tanh(),
nn.Linear(hidden_size, output_size),
    nn.Softmax(dim=-1)  # normalize over the output dimension
)
###Output
_____no_output_____
###Markdown
As we have seen in the slides, we can just as easily mix pre-defined modules with arithmetic operations. Here, we will define a *residual* block, and then combine several of them in a more complex network
###Code
class ResBlock(nn.Module):
def __init__(self, dim, dim_res=32):
super().__init__()
self.block = nn.Sequential(
nn.Conv2d(dim, dim_res, 3, 1, 1),
nn.ReLU(True),
nn.Conv2d(dim_res, dim, 1),
nn.ReLU(True)
)
def forward(self, x):
return x + self.block(x)
model = nn.Sequential(
ResBlock(64, 32),
ResBlock(64, 32),
)
###Output
_____no_output_____
###Markdown
Defining our own layers In the following, we re-implement the *attention* layer, which is the basis of the famous `Transformer` models.
###Code
class AttentionLayer(nn.Module):
    def __init__(self, n_hidden):
        super(AttentionLayer, self).__init__()
        self.mlp = nn.Linear(n_hidden, n_hidden)
        self.u_w = nn.Parameter(torch.rand(n_hidden))
    def forward(self, X):
        # get the hidden representation of the sequence (batch, time, hidden)
        u_it = torch.tanh(self.mlp(X))
        # get attention weights for each timestep (batch, time)
        alpha = torch.softmax(torch.matmul(u_it, self.u_w), dim=1)
        # get the weighted sum of the sequence over the time dimension (batch, hidden)
        out = torch.sum(alpha.unsqueeze(-1) * X, dim=1)
        return out, alpha
###Output
_____no_output_____
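###Markdown
As a short usage sketch (the random sequence below is purely illustrative and not part of the original notebook), the layer can be applied to a batch of hidden-state sequences of shape (batch, time, hidden); it returns a summary vector per sequence together with the attention weights.
###Code
attn = AttentionLayer(n_hidden=16)
seq = torch.randn(4, 10, 16)          # 4 sequences of 10 timesteps with 16 hidden units
context, weights = attn(seq)
print(context.shape, weights.shape)   # expected: torch.Size([4, 16]) torch.Size([4, 10])
###Output
_____no_output_____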
###Markdown
Music machine learning - Pytorch Author: Philippe Esling ([email protected]) In this course we will cover 1. A [quick introduction](intro) to Pytorch 2. An implementation of [advanced models](models) 3. A quick proposal for [attention](attention) layers Introduction to Pytorch `Pytorch` is a Python-based scientific computing package targeted at deep learning, which provides great flexibility and ease of use for GPU computation. `Pytorch` is built around the concept of the `Tensor`, which is very similar to `numpy.ndarray` but can be seamlessly run on a GPU. Here are some examples of different ways of creating a `Tensor`
###Code
import torch
# Create a 5 x 3 Tensor of zeros
x = torch.empty(5, 3)
# Create a 64 x 3 x 32 x 32 random Tensor
x = torch.rand(64, 3, 32, 32)
# Create a Tensor of zeros with _long_ type
x = torch.zeros(10, 10, dtype=torch.long)
# Construct a Tensor from the data
x = torch.tensor([5.5, 3])
###Output
_____no_output_____
###Markdown
or create a tensor based on an existing tensor. These methods will reuse properties of the input tensor, e.g. dtype, unless new values are provided by the user
###Code
x = x.new_ones(8, 2, dtype=torch.double) # new_* methods take in sizes
x = torch.randn_like(x, dtype=torch.float) # override dtype!
print(x.size())
print(x.shape[0])
###Output
torch.Size([8, 2])
8
###Markdown
Arithmetic operations Tensors provide access to a full library of arithmetic operations
###Code
x = torch.rand(8, 2)
y = torch.rand(8, 2)
z = torch.rand(2, 4)
# Equivalent additions
print(x + y)
print(torch.add(x, y))
# Add in place
x.add_(y)
print(x)
# Put in target Tensor
result = torch.empty(8, 2)  # pre-allocate the output with the same shape as x and y
torch.add(x, y, out=result)
print(result)
# Element-wise multiplication
print(x * y)
# Matrix product
print(x @ z)
###Output
tensor([[0.5749, 0.5255],
[0.9960, 0.9434],
[1.7060, 1.3024],
[1.4466, 0.8099],
[1.0718, 1.0909],
[0.6841, 0.9996],
[0.5667, 1.8098],
[1.3244, 1.0366]])
tensor([[0.5749, 0.5255],
[0.9960, 0.9434],
[1.7060, 1.3024],
[1.4466, 0.8099],
[1.0718, 1.0909],
[0.6841, 0.9996],
[0.5667, 1.8098],
[1.3244, 1.0366]])
tensor([[0.5749, 0.5255],
[0.9960, 0.9434],
[1.7060, 1.3024],
[1.4466, 0.8099],
[1.0718, 1.0909],
[0.6841, 0.9996],
[0.5667, 1.8098],
[1.3244, 1.0366]])
tensor([[0.5919, 0.9298],
[1.1173, 1.4515],
[2.5675, 1.7883],
[2.0762, 0.9926],
[1.5930, 1.7934],
[1.0501, 1.9560],
[0.6003, 2.7626],
[2.2344, 1.6342]])
tensor([[0.0097, 0.2125],
[0.1208, 0.4794],
[1.4697, 0.6328],
[0.9108, 0.1480],
[0.5586, 0.7664],
[0.2503, 0.9560],
[0.0190, 1.7244],
[1.2052, 0.6195]])
tensor([[0.6307, 0.5176, 0.4123, 0.0590],
[1.1036, 0.9269, 0.7322, 0.1031],
[1.7854, 1.3017, 1.0830, 0.1676],
[1.4153, 0.8352, 0.7575, 0.1336],
[1.2130, 1.0665, 0.8294, 0.1131],
[0.8757, 0.9574, 0.6950, 0.0809],
[1.0539, 1.6884, 1.1119, 0.0952],
[1.3946, 1.0338, 0.8547, 0.1308]])
###Markdown
Slicing and resizing You can slice tensors using the usual Python operators. For resizing and reshaping tensors, you can use ``torch.view`` or ``torch.reshape``
###Code
print(x[:, 1])
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
###Output
tensor([-0.4963, -0.3877, 0.1876, -0.5845, 1.5996, 0.9076, 2.9362, -1.5117])
torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
###Markdown
If you have a one-element tensor, use ``.item()`` to get the value as a Python number
###Code
x = torch.randn(1)
print(x)
print(x.item())
###Output
_____no_output_____
###Markdown
Tensors support more than 100 operations, including transposing, indexing, slicing, mathematical operations, linear algebra, and random number generation, all of which are described at [https://pytorch.org/docs/torch](https://pytorch.org/docs/torch) Numpy bridge Converting a Torch Tensor to a Numpy array and vice versa is extremely simple. Note that the Pytorch Tensor and Numpy array **will share their underlying memory locations** (if the Tensor is on CPU), and changing one will change the other.
###Code
a = torch.ones(5)
b = a.numpy()
a.add_(1)
print(a)
print(b)
###Output
tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]
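###Markdown
The conversion also works in the other direction; the following is a small illustrative sketch (not part of the original cells) using ``torch.from_numpy``, which likewise shares memory with the source array.
###Code
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)   # b is a torch tensor sharing memory with a
np.add(a, 1, out=a)       # modify the numpy array in place
print(a)                  # the numpy array changed...
print(b)                  # ...and so did the tensor built from it
###Output
_____no_output_____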
###Markdown
Going GPU Tensors can be moved onto any device using the ``.to`` method.
###Code
# let us run this cell only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
if torch.cuda.is_available():
device = torch.device("cuda") # a CUDA device object
y = torch.ones_like(x, device=device) # directly create a tensor on GPU
x = x.to(device) # or just use strings ``.to("cuda")``
z = x + y
print(z)
print(z.to("cpu", torch.double)) # ``.to`` can also change dtype together!
###Output
_____no_output_____
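###Markdown
A common device-agnostic pattern (shown here as an illustrative sketch, not from the original notebook) is to pick the device once and reuse it whenever tensors are created or moved.
###Code
# Select the GPU when available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
w = torch.zeros(3, 3, device=device)   # created directly on the chosen device
print(w.device)
###Output
_____no_output_____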
###Markdown
Computation Graphs The concept of a computation graph is essential to efficient deep learning programming, because it means you do not have to write the backpropagation gradients yourself. A computation graph is simply a specification of how your data is combined to give you the output (the forward pass). Since the graph totally specifies which parameters were involved in which operations, it contains enough information to compute derivatives. The fundamental flag ``requires_grad`` allows you to specify which variables are going to need differentiation in all these operations. If ``requires_grad=True``, the Tensor object keeps track of how it was created.
###Code
# Tensor factory methods have a ``requires_grad`` flag
x = torch.tensor([1., 2., 3], requires_grad=True)
# With requires_grad=True, we can still do all the operations
y = torch.tensor([4., 5., 6], requires_grad=True)
z = x + y
print(z)
# But z now knows something extra.
print(z.grad_fn)
###Output
tensor([5., 7., 9.], grad_fn=<AddBackward0>)
<AddBackward0 object at 0x12d601fd0>
###Markdown
Therefore, `z` knows that it is the direct result of an addition. Furthermore, if we keep following z.grad_fn, we can even trace back to both `x` and `y`. But how does that help us compute a gradient?
###Code
# Lets sum up all the entries in z
s = z.sum()
print(s)
print(s.grad_fn)
###Output
tensor(21., grad_fn=<SumBackward0>)
<SumBackward0 object at 0x12d5f4d90>
###Markdown
So now, what is the derivative of this sum with respect to the first component of x? In math, we want\begin{align}\frac{\partial s}{\partial x_0}\end{align}Well, s knows that it was created as a sum of the tensor z. z knows that it was the sum x + y. So\begin{align}s = \overbrace{x_0 + y_0}^\text{$z_0$} + \overbrace{x_1 + y_1}^\text{$z_1$} + \overbrace{x_2 + y_2}^\text{$z_2$}\end{align}And so s contains enough information to determine that the derivative we want is 1. We can have Pytorch compute the gradient, and see that we were right:**Note** : If you run this block multiple times, the gradient will increment. That is because Pytorch *accumulates* the gradient into the .grad property, since for many models this is very convenient.
###Code
# calling .backward() on any variable will run backprop, starting from it.
s.backward()
print(x.grad)
###Output
tensor([1., 1., 1.])
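###Markdown
A small follow-up sketch (not from the original notebook) illustrating the note above: running another backward pass accumulates into ``.grad``, and ``x.grad.zero_()`` resets it in place.
###Code
s2 = (x + y).sum()   # build a fresh graph with the same leaf tensors
s2.backward()
print(x.grad)        # the new gradient has been added to the previous one
x.grad.zero_()       # reset the accumulated gradient in place
print(x.grad)
###Output
_____no_output_____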
###Markdown
Understanding what is going on in the block below is crucial for being a successful programmer in deep learning.
###Code
x = torch.randn(2, 2)
y = torch.randn(2, 2)
# By default, user created Tensors have ``requires_grad=False``
print(x.requires_grad, y.requires_grad)
z = x + y
# So you can't backprop through z
print(z.grad_fn)
# ``.requires_grad_( ... )`` changes an existing Tensor's ``requires_grad``
x = x.requires_grad_()
y = y.requires_grad_()
# z contains enough information to compute gradients, as we saw above
z = x + y
print(z.grad_fn)
# If any input to an operation has ``requires_grad=True``, so will the output
print(z.requires_grad)
# Now z has the computation history, which we can **detach**
new_z = z.detach()
# Which means that we have no gradient attached anymore
print(new_z.grad_fn)
###Output
False False
None
<AddBackward0 object at 0x12d60ca10>
True
None
###Markdown
You can also stop autograd from tracking history on Tensors with ``.requires_grad=True`` by wrapping the code block in ``with torch.no_grad():``
###Code
print(x.requires_grad)
print((x ** 2).requires_grad)
with torch.no_grad():
print((x ** 2).requires_grad)
###Output
True
True
False
###Markdown
Defining networks Here, we briefly recall that in `PyTorch`, the `nn` package provides higher-level abstractions over raw computational graphs that are useful for building neural networks. The `nn` package defines a set of `Modules`, which are roughly equivalent to neural network layers. A `Module` receives input `Tensors` and computes output `Tensors`, but may also hold internal state such as `Tensors` containing learnable parameters. In the following example, we use the `nn` package to show how easy it is to instantiate a three-layer network
###Code
import torch
import torch.nn as nn
# Define the input dimensions
in_size = 1000
# Number of neurons in a layer
hidden_size = 100
# Output (target) dimension
output_size = 10
# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
nn.Linear(in_size, hidden_size),
nn.ReLU(),
nn.Linear(hidden_size, hidden_size),
nn.Tanh(),
nn.Linear(hidden_size, output_size),
    nn.Softmax(dim=-1)  # normalize over the output dimension
)
###Output
_____no_output_____
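###Markdown
As a quick usage sketch (the batch of random data below is purely illustrative), we can push a dummy batch through the model defined above and check the output shape.
###Code
batch = torch.randn(64, in_size)   # 64 random examples with the expected input size
out = model(batch)
print(out.shape)                   # should be (64, output_size), i.e. torch.Size([64, 10])
###Output
_____no_output_____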
###Markdown
As we have seen in the slides, we can just as easily mix pre-defined modules with arithmetic operations. Here, we will define a *residual* block, and then combine several of them in a more complex network
###Code
class ResBlock(nn.Module):
def __init__(self, dim, dim_res=32):
super().__init__()
self.block = nn.Sequential(
nn.Conv2d(dim, dim_res, 3, 1, 1),
nn.ReLU(True),
nn.Conv2d(dim_res, dim, 1),
nn.ReLU(True)
)
def forward(self, x):
return x + self.block(x)
model = nn.Sequential(
ResBlock(64, 32),
ResBlock(64, 32),
)
###Output
_____no_output_____
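###Markdown
A quick shape check (illustrative only, not part of the original notebook): the residual stack above expects 64-channel feature maps, so we can pass a dummy batch of 8 such maps of size 32x32 and verify that the shape is preserved.
###Code
feats = torch.randn(8, 64, 32, 32)    # (batch, channels, height, width)
print(model(feats).shape)             # same shape as the input, since each block adds a residual
###Output
_____no_output_____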
###Markdown
Defining our own layers In the following, we re-implement the *attention* layer, which is the basis of the famous `Transformer` models.
###Code
class AttentionLayer(nn.Module):
    def __init__(self, n_hidden):
        super(AttentionLayer, self).__init__()
        self.mlp = nn.Linear(n_hidden, n_hidden)
        self.u_w = nn.Parameter(torch.rand(n_hidden))
    def forward(self, X):
        # get the hidden representation of the sequence (batch, time, hidden)
        u_it = torch.tanh(self.mlp(X))
        # get attention weights for each timestep (batch, time)
        alpha = torch.softmax(torch.matmul(u_it, self.u_w), dim=1)
        # get the weighted sum of the sequence over the time dimension (batch, hidden)
        out = torch.sum(alpha.unsqueeze(-1) * X, dim=1)
        return out, alpha
###Output
_____no_output_____ |
lecture9/Plotly_Presentation.ipynb | ###Markdown
Overview of Plotly for Python **Victoria Gregory** 4/1/2016 What is Plotly?* `plotly.js`: online JavaScript graphing library* Today I'll talk about its Python client* Both `plotly.js` and the Python library are free and open-source* Similar libraries for Julia, R, and Matlab What can I do with Plotly?* Useful for data visualization and fully interactive graphics* Standard graphics interface across languages* Easily shareable online* 20 types of charts, including statistical plots, 3D charts, and maps* [Complete list here](https://plot.ly/python/) Just a few examples...
###Code
import plotly.tools as tls
tls.embed('https://plot.ly/~AnnaG/1/nfl-defensive-player-size-2013-season/')
tls.embed('https://plot.ly/~chris/7378/relative-number-of-311-complaints-by-city/')
tls.embed('https://plot.ly/~empet/2922/a-scoreboard-for-republican-candidates-as-of-august-17-2015-annotated-heatmap/')
tls.embed('https://plot.ly/~vgregory757/2/_2014-us-city-populations-click-legend-to-toggle-traces/')
###Output
_____no_output_____
###Markdown
Getting started* Easy to install: `pip install plotly`* How to save and view files? * Can work offline and save as `.html` files to open in a web browser * Jupyter notebook * Upload to an online account for easy sharing: the import statement automatically signs you in How It Works* Graph objects * Same structure as native Python dictionaries and lists * Defined as new classes * Every Plotly plot type has its own graph object, e.g., `Scatter`, `Bar`, `Histogram` * All information in a Plotly plot is contained in a `Figure` object, which contains * a `Data` object: stores data and style options, e.g., setting the line color * a `Layout` object: for aesthetic features outside the plotting area, e.g., setting the title * *trace*: refers to a set of data meant to be plotted as a whole (like an $x$ and $y$ pairing) * Interactivity is automatic! Line/Scatter Plots The following `import` statements load the three main modules:
###Code
# (*) Tools to communicate with Plotly's server
import plotly.plotly as py
# (*) Useful Python/Plotly tools
import plotly.tools as tls
# (*) Graph objects to piece together your Plotly plots
import plotly.graph_objs as go
###Output
_____no_output_____
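###Markdown
The "Getting started" notes above mention working offline; as a small illustrative sketch (assuming the ``plotly.offline`` module bundled with this version of plotly, and with a made-up output file name), a figure can be written to a standalone ``.html`` file instead of being uploaded.
###Code
import plotly.offline as pyo
# Build a tiny throwaway figure and save it locally as an .html file
tiny_fig = go.Figure(data=go.Data([go.Scatter(x=[1, 2, 3], y=[2, 1, 3])]),
                     layout=go.Layout(title='Offline example'))
pyo.plot(tiny_fig, filename='offline-example.html', auto_open=False)
###Output
_____no_output_____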
###Markdown
The following code will make a simple line and scatter plot:
###Code
# Create random data with numpy
import numpy as np
N = 100
random_x = np.linspace(0, 1, N)
random_y0 = np.random.randn(N)+5
random_y1 = np.random.randn(N)
random_y2 = np.random.randn(N)-5
# (1.1) Make a 1st Scatter object
trace0 = go.Scatter(
x = random_x,
y = random_y0,
mode = 'markers',
name = '$\mu = 5$',
hoverinfo='x+y' # choosing what to show on hover
)
# (1.2) Make a 2nd Scatter object
trace1 = go.Scatter(
x = random_x,
y = random_y1,
mode = 'lines+markers',
name = '$\mu = 0$',
hoverinfo='x+y'
)
# (1.3) Make a 3rd Scatter object
trace2 = go.Scatter(
x = random_x,
y = random_y2,
mode = 'lines',
name = '$\mu = -5$',
hoverinfo='x+y'
)
# (2) Make Data object
# Data is list-like, must use [ ]
data = go.Data([trace0, trace1, trace2])
# (3) Make Layout object (Layout is dict-like)
layout = go.Layout(title='$\\text{Some scatter objects distributed as } \
\mathcal{N}(\mu,1)$',
xaxis=dict(title='x-axis label'),
yaxis=dict(title='y-axis label'),
showlegend=True)
# (4) Make Figure object (Figure is dict-like)
fig = go.Figure(data=data, layout=layout)
print(fig) # print the figure object in notebook
###Output
{'layout': {'showlegend': True, 'yaxis': {'title': 'y-axis label'}, 'xaxis': {'title': 'x-axis label'}, 'title': '$\\text{Some scatter objects distributed as } \\mathcal{N}(\\mu,1)$'}, 'data': [{'name': '$\\mu = 5$', 'mode': 'markers', 'hoverinfo': 'x+y', 'y': array([ 4.04668017, 6.19854098, 4.62444061, 5.09242471, 3.66649515,
6.58469017, 5.19130891, 6.6651075 , 4.69078908, 5.8217442 ,
6.6377433 , 3.50985828, 5.91740602, 3.42162452, 5.58354415,
4.42149207, 5.12235742, 4.68431865, 3.85567028, 6.45240545,
4.39855931, 4.34472981, 4.29497064, 5.50473226, 5.21625372,
4.46215315, 5.00053252, 5.90014207, 5.41637191, 5.51115194,
4.56673328, 6.03843503, 5.56792862, 5.7704772 , 3.71154776,
4.7388194 , 6.08732718, 5.42687078, 6.58736437, 5.67774481,
5.74155225, 5.91060711, 4.88168997, 5.26141665, 4.70980663,
4.54812936, 4.90691582, 4.81522669, 4.71825569, 5.55335487,
4.08928611, 5.63225122, 5.94667951, 4.39225355, 4.74958193,
3.92407449, 7.07111905, 5.495516 , 4.09448008, 5.77320071,
4.64677053, 3.10583353, 4.98918102, 4.48544076, 4.76270418,
3.07037985, 5.43110144, 4.39505671, 6.45377374, 4.79077288,
4.52728717, 3.82026289, 4.53460106, 4.49427596, 4.43350851,
3.52965903, 4.87801779, 3.97288821, 5.3209264 , 5.17132472,
5.38980652, 4.67934665, 5.90043308, 4.47941995, 4.04320281,
4.05764549, 5.87833999, 5.42252719, 4.78779398, 4.80676704,
5.21730358, 4.3471276 , 6.28070553, 5.28354741, 2.30351284,
5.10559463, 4.18524983, 6.19731501, 5.62587121, 4.37582245]), 'x': array([ 0. , 0.01010101, 0.02020202, 0.03030303, 0.04040404,
0.05050505, 0.06060606, 0.07070707, 0.08080808, 0.09090909,
0.1010101 , 0.11111111, 0.12121212, 0.13131313, 0.14141414,
0.15151515, 0.16161616, 0.17171717, 0.18181818, 0.19191919,
0.2020202 , 0.21212121, 0.22222222, 0.23232323, 0.24242424,
0.25252525, 0.26262626, 0.27272727, 0.28282828, 0.29292929,
0.3030303 , 0.31313131, 0.32323232, 0.33333333, 0.34343434,
0.35353535, 0.36363636, 0.37373737, 0.38383838, 0.39393939,
0.4040404 , 0.41414141, 0.42424242, 0.43434343, 0.44444444,
0.45454545, 0.46464646, 0.47474747, 0.48484848, 0.49494949,
0.50505051, 0.51515152, 0.52525253, 0.53535354, 0.54545455,
0.55555556, 0.56565657, 0.57575758, 0.58585859, 0.5959596 ,
0.60606061, 0.61616162, 0.62626263, 0.63636364, 0.64646465,
0.65656566, 0.66666667, 0.67676768, 0.68686869, 0.6969697 ,
0.70707071, 0.71717172, 0.72727273, 0.73737374, 0.74747475,
0.75757576, 0.76767677, 0.77777778, 0.78787879, 0.7979798 ,
0.80808081, 0.81818182, 0.82828283, 0.83838384, 0.84848485,
0.85858586, 0.86868687, 0.87878788, 0.88888889, 0.8989899 ,
0.90909091, 0.91919192, 0.92929293, 0.93939394, 0.94949495,
0.95959596, 0.96969697, 0.97979798, 0.98989899, 1. ]), 'type': 'scatter'}, {'name': '$\\mu = 0$', 'mode': 'lines+markers', 'hoverinfo': 'x+y', 'y': array([-0.34691493, 2.17938364, 0.51809962, -0.5959291 , 0.23412758,
-2.07896695, 1.40091551, 0.24549119, -0.56185707, -0.34058195,
-0.61737475, 0.45612955, -0.52261175, -0.48896567, -1.04546019,
0.26031148, 0.8714819 , 0.66574976, -0.01393808, 1.33190518,
-0.93344882, 1.47167414, -0.44745963, -0.34761853, -0.8235109 ,
1.23946937, -0.57356471, 0.12733162, 1.18614807, 0.84700632,
0.24209963, 1.23406421, 0.16085798, -0.21386201, -0.4344829 ,
0.28582313, 0.96303331, 0.64243359, 0.80443922, -0.73621594,
-0.63861189, 0.19156248, 0.13184313, 0.00497728, 0.99412137,
-0.92522068, 0.42878841, -2.02951441, -1.28557997, -2.74433002,
-2.60336845, 0.20100076, 0.18442098, -0.04819198, 1.55876483,
0.2357085 , 0.43286067, -0.07853408, -1.10796824, 0.73222129,
-0.18911711, -0.46665695, 0.22134336, -0.34721588, -0.31997409,
0.22769666, -0.12279111, 0.39043892, -0.40059278, -1.47428438,
-0.25698252, -1.15126189, 0.98357977, 0.84970328, -1.98720117,
1.11064262, 1.44028829, -1.63808531, 0.98075371, -0.58109039,
-0.38653214, -0.82579092, -0.08508776, 0.15513385, 0.1818524 ,
-0.92253578, 2.0418283 , -1.02322954, 3.76191565, -0.89168133,
0.28982609, 0.12799418, 1.01013012, -0.24028744, -0.5506389 ,
-0.21764378, -1.42432799, 0.51234302, 0.12061286, 1.94768532]), 'x': array([ 0. , 0.01010101, 0.02020202, 0.03030303, 0.04040404,
0.05050505, 0.06060606, 0.07070707, 0.08080808, 0.09090909,
0.1010101 , 0.11111111, 0.12121212, 0.13131313, 0.14141414,
0.15151515, 0.16161616, 0.17171717, 0.18181818, 0.19191919,
0.2020202 , 0.21212121, 0.22222222, 0.23232323, 0.24242424,
0.25252525, 0.26262626, 0.27272727, 0.28282828, 0.29292929,
0.3030303 , 0.31313131, 0.32323232, 0.33333333, 0.34343434,
0.35353535, 0.36363636, 0.37373737, 0.38383838, 0.39393939,
0.4040404 , 0.41414141, 0.42424242, 0.43434343, 0.44444444,
0.45454545, 0.46464646, 0.47474747, 0.48484848, 0.49494949,
0.50505051, 0.51515152, 0.52525253, 0.53535354, 0.54545455,
0.55555556, 0.56565657, 0.57575758, 0.58585859, 0.5959596 ,
0.60606061, 0.61616162, 0.62626263, 0.63636364, 0.64646465,
0.65656566, 0.66666667, 0.67676768, 0.68686869, 0.6969697 ,
0.70707071, 0.71717172, 0.72727273, 0.73737374, 0.74747475,
0.75757576, 0.76767677, 0.77777778, 0.78787879, 0.7979798 ,
0.80808081, 0.81818182, 0.82828283, 0.83838384, 0.84848485,
0.85858586, 0.86868687, 0.87878788, 0.88888889, 0.8989899 ,
0.90909091, 0.91919192, 0.92929293, 0.93939394, 0.94949495,
0.95959596, 0.96969697, 0.97979798, 0.98989899, 1. ]), 'type': 'scatter'}, {'name': '$\\mu = -5$', 'mode': 'lines', 'hoverinfo': 'x+y', 'y': array([-4.48987402, -5.00299715, -5.27569342, -3.8223728 , -3.6502064 ,
-4.93187342, -3.5115968 , -7.6986876 , -3.76826013, -6.4241302 ,
-4.73376471, -5.04163874, -5.77634254, -6.26209807, -4.60243579,
-4.67358739, -5.14992072, -5.29748225, -5.06602165, -6.13815511,
-5.564722 , -5.63855557, -4.82780904, -6.33164629, -5.56346489,
-5.70394686, -4.330434 , -5.49693236, -4.84009953, -4.6317476 ,
-4.010567 , -7.48336095, -6.09522473, -4.89513925, -4.83467357,
-4.70691006, -6.33809177, -6.29332726, -2.96358237, -4.7275947 ,
-5.83294186, -5.9606109 , -5.58472397, -5.62344515, -4.02465416,
-3.02183737, -4.86414021, -4.39816038, -6.61453443, -5.61632089,
-5.61164201, -3.60435107, -4.93173037, -5.2246923 , -4.56053351,
-5.71917183, -5.71984894, -5.60812793, -5.16043173, -4.15756347,
-3.7194692 , -5.52792439, -3.62796425, -4.86906244, -3.40373272,
-5.84820953, -3.46684876, -5.16100917, -5.74868657, -5.0236031 ,
-3.56692692, -2.02221892, -6.80212881, -3.8346147 , -5.91834124,
-2.31426304, -6.77714362, -4.95939023, -5.32436323, -4.86277363,
-5.28515788, -4.56678228, -3.81648371, -4.407105 , -4.70197081,
-6.01753656, -6.82434989, -5.93431004, -4.24573797, -4.33361792,
-5.97240983, -4.21877573, -4.80431133, -4.50803704, -4.76269954,
-4.79829357, -4.33052356, -4.4148409 , -5.98019236, -3.94674547]), 'x': array([ 0. , 0.01010101, 0.02020202, 0.03030303, 0.04040404,
0.05050505, 0.06060606, 0.07070707, 0.08080808, 0.09090909,
0.1010101 , 0.11111111, 0.12121212, 0.13131313, 0.14141414,
0.15151515, 0.16161616, 0.17171717, 0.18181818, 0.19191919,
0.2020202 , 0.21212121, 0.22222222, 0.23232323, 0.24242424,
0.25252525, 0.26262626, 0.27272727, 0.28282828, 0.29292929,
0.3030303 , 0.31313131, 0.32323232, 0.33333333, 0.34343434,
0.35353535, 0.36363636, 0.37373737, 0.38383838, 0.39393939,
0.4040404 , 0.41414141, 0.42424242, 0.43434343, 0.44444444,
0.45454545, 0.46464646, 0.47474747, 0.48484848, 0.49494949,
0.50505051, 0.51515152, 0.52525253, 0.53535354, 0.54545455,
0.55555556, 0.56565657, 0.57575758, 0.58585859, 0.5959596 ,
0.60606061, 0.61616162, 0.62626263, 0.63636364, 0.64646465,
0.65656566, 0.66666667, 0.67676768, 0.68686869, 0.6969697 ,
0.70707071, 0.71717172, 0.72727273, 0.73737374, 0.74747475,
0.75757576, 0.76767677, 0.77777778, 0.78787879, 0.7979798 ,
0.80808081, 0.81818182, 0.82828283, 0.83838384, 0.84848485,
0.85858586, 0.86868687, 0.87878788, 0.88888889, 0.8989899 ,
0.90909091, 0.91919192, 0.92929293, 0.93939394, 0.94949495,
0.95959596, 0.96969697, 0.97979798, 0.98989899, 1. ]), 'type': 'scatter'}]}
###Markdown
Figure objects store data like a Python dictionary.
###Code
# (5) Send Figure object to Plotly and show plot in notebook
py.iplot(fig, filename='scatter-mode')
###Output
_____no_output_____
###Markdown
Can save a static image as well:
###Code
py.image.save_as(fig, filename='scatter-mode.png')
###Output
_____no_output_____
###Markdown
Histograms
###Code
# (1) Generate some random numbers
x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
# (2.1) Create the first Histogram object
trace1 = go.Histogram(
x=x0,
histnorm='count',
name='control',
autobinx=False,
xbins=dict(
start=-3.2,
end=2.8,
size=0.2
),
marker=dict(
color='fuchsia',
line=dict(
color='grey',
width=0
)
),
opacity=0.75
)
# (2.2) Create the second Histogram object
trace2 = go.Histogram(
x=x1,
name='experimental',
autobinx=False,
xbins=dict(
start=-1.8,
end=4.2,
size=0.2
),
marker=dict(
color='rgb(255, 217, 102)'
),
opacity=0.75
)
# (3) Create Data object
data = [trace1, trace2]
# (4) Create Layout object
layout = go.Layout(
title='Sampled Results',
xaxis=dict(
title='Value'
),
yaxis=dict(
title='Count'
),
barmode='overlay',
bargap=0.25,
bargroupgap=0.3,
showlegend=True
)
fig = go.Figure(data=data, layout=layout)
# (5) Send Figure object to Plotly and show plot in notebook
py.iplot(fig, filename='histogram_example')
###Output
_____no_output_____
###Markdown
Distplots Similar to `seaborn.distplot`. Plot a histogram, kernel density or normal curve, and a rug plot all together.
###Code
from plotly.tools import FigureFactory as FF
# Add histogram data
x1 = np.random.randn(200)-2
x2 = np.random.randn(200)
x3 = np.random.randn(200)+2
x4 = np.random.randn(200)+4
# Group data together
hist_data = [x1, x2, x3, x4]
group_labels = ['Group 1', 'Group 2', 'Group 3', 'Group 4']
# Create distplot with custom bin_size
fig = FF.create_distplot(hist_data, group_labels, bin_size=.2)
# Plot!
py.iplot(fig, filename='Distplot with Multiple Datasets', \
validate=False)
###Output
_____no_output_____
###Markdown
2D Contour Plot
###Code
x = np.random.randn(1000)
y = np.random.randn(1000)
py.iplot([go.Histogram2dContour(x=x, y=y, \
contours=go.Contours(coloring='fill')), \
go.Scatter(x=x, y=y, mode='markers', \
marker=go.Marker(color='white', size=3, opacity=0.3))])
###Output
_____no_output_____
###Markdown
3D Surface Plot Plot the function: $f(x,y) = A \cos^2(\pi x y) e^{-(x^2+y^2)/2}$
###Code
# Define the function to be plotted
def fxy(x, y):
A = 1 # choose a maximum amplitude
return A*(np.cos(np.pi*x*y))**2 * np.exp(-(x**2+y**2)/2.)
# Choose length of square domain, make row and column vectors
L = 4
x = y = np.arange(-L/2., L/2., 0.1) # use a mesh spacing of 0.1
yt = y[:, np.newaxis] # (!) make column vector
# Get surface coordinates!
z = fxy(x, yt)
trace1 = go.Surface(
z=z, # link the fxy 2d numpy array
x=x, # link 1d numpy array of x coords
y=y # link 1d numpy array of y coords
)
# Package the trace dictionary into a data object
data = go.Data([trace1])
# Dictionary of style options for all axes
axis = dict(
showbackground=True, # (!) show axis background
backgroundcolor="rgb(204, 204, 204)", # set background color to grey
gridcolor="rgb(255, 255, 255)", # set grid line color
zerolinecolor="rgb(255, 255, 255)", # set zero grid line color
)
# Make a layout object
layout = go.Layout(
    title='$f(x,y) = A \cos^2(\pi x y) e^{-(x^2+y^2)/2}$', # set plot title
scene=go.Scene( # (!) axes are part of a 'scene' in 3d plots
xaxis=go.XAxis(axis), # set x-axis style
yaxis=go.YAxis(axis), # set y-axis style
zaxis=go.ZAxis(axis) # set z-axis style
)
)
# Make a figure object
fig = go.Figure(data=data, layout=layout)
# (@) Send to Plotly and show in notebook
py.iplot(fig, filename='surface')
###Output
_____no_output_____
###Markdown
Matplotlib Conversion
###Code
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
n = 50
x, y, z, s, ew = np.random.rand(5, n)
c, ec = np.random.rand(2, n, 4)
area_scale, width_scale = 500, 5
fig, ax = plt.subplots()
sc = ax.scatter(x, y, c=c,
s=np.square(s)*area_scale,
edgecolor=ec,
linewidth=ew*width_scale)
ax.grid()
py.iplot_mpl(fig)
###Output
_____no_output_____ |
PartThree.ipynb | ###Markdown
Part 3: Exploring `scoresFull.csv` using `pandas on spark` and `Spark` Map-Reduce __To get started on Part 3, we first need to import the libraries below.__ 1. _os_ - This module provides a portable way of using operating-system-dependent functionality. 2. _sys_ - This module provides access to system-specific parameters and functions. 3. _from pyspark.sql import SparkSession_ - This is the entry point to programming Spark with the Dataset and DataFrame API. 4. _matplotlib.pyplot as plt_ - This is a collection of functions that make matplotlib work like MATLAB; each pyplot function makes some change to a figure. 5. _pandas as pd_ - pandas is an open-source data analysis library built on top of the Python programming language. 6. _pyspark.pandas as ps_ - Spark now integrates a pandas API, so you can run pandas-style code on top of Spark. 7. _warnings_ - This module lets us show/hide the warnings generated by the built-in pyarrow module. __Below are the environment variables to set so that pyspark works on a Windows machine.__ _os.environ['PYSPARK_PYTHON'] = sys.executable_ _os.environ['PYSPARK_DRIVER_PYTHON'] = sys.executable_ __The code below initializes a Spark session to keep our work saved within that session.__ _spark = SparkSession.builder.master('local[*]').getOrCreate()_
###Code
#Import models
import os
import sys
os.environ['PYSPARK_PYTHON'] = sys.executable
os.environ['PYSPARK_DRIVER_PYTHON'] = sys.executable
from pyspark.sql import SparkSession
import matplotlib.pyplot as plt
import pandas as pd
import pyspark.pandas as ps
spark = SparkSession.builder.master('local[*]').getOrCreate()
import warnings
warnings.filterwarnings("ignore")
###Output
WARNING:root:'PYARROW_IGNORE_TIMEZONE' environment variable was not set. It is required to set this environment variable to '1' in both driver and executor sides if you use pyarrow>=2.0.0. pandas-on-Spark will set it for you but it does not work if there is a Spark context already launched.
###Markdown
Exploring `scoresFull.csv` with `pandas on spark` Let's read scoresFull csv using pandas on spark!
###Code
psdf_data = ps.read_csv('scoresFull.csv', sep=",")
psdf_data.head(2)
###Output
_____no_output_____
###Markdown
Using pandas-on-Spark to find the mean and standard deviation for the AQ1, AQ2, AQ3, AQ4, AFinal, HQ1, HQ2, HQ3, HQ4, and HFinal variables
###Code
psdf_mean = psdf_data.agg({'AQ1': ['mean', 'std'], 'AQ2': ['mean', 'std'], 'AQ3': ['mean', 'std'], \
'AQ4': ['mean', 'std'], 'AFinal': ['mean', 'std'], 'HQ1': ['mean', 'std'], \
'HQ2': ['mean', 'std'], 'HQ3': ['mean', 'std'], 'HQ4': ['mean', 'std'], \
'HFinal': ['mean', 'std'] \
})
psdf_mean
###Output
_____no_output_____
###Markdown
Let's repeat the above task to get the mean and std of the given variables for each season.
###Code
psdf_season_mean_std = psdf_data.groupby('season').agg({'AQ1': ['mean', 'std'], 'AQ2': ['mean', 'std'], 'AQ3': ['mean', 'std'], \
'AQ4': ['mean', 'std'], 'AFinal': ['mean', 'std'], 'HQ1': ['mean', 'std'], \
'HQ2': ['mean', 'std'], 'HQ3': ['mean', 'std'], 'HQ4': ['mean', 'std'], \
'HFinal': ['mean', 'std'] \
}).sort_index()
psdf_season_mean_std
# Using plots, we will observe the trend of means and std across the seasons from 2002 to 2014
figure, axis = plt.subplots(1, 2,figsize=(25, 10))
axis[0].plot(psdf_season_mean_std[('AQ1', 'mean')], label = "AQ1")
axis[0].plot(psdf_season_mean_std[('AQ2', 'mean')], label = "AQ2")
axis[0].plot(psdf_season_mean_std[('AQ3', 'mean')], label = "AQ3")
axis[0].plot(psdf_season_mean_std[('AQ4', 'mean')], label = "AQ4")
axis[0].plot(psdf_season_mean_std[('AFinal', 'mean')], label = "AFinal")
axis[0].plot(psdf_season_mean_std[('HQ1', 'mean')], label = "HQ1")
axis[0].plot(psdf_season_mean_std[('HQ2', 'mean')], label = "HQ2")
axis[0].plot(psdf_season_mean_std[('HQ3', 'mean')], label = "HQ3")
axis[0].plot(psdf_season_mean_std[('HQ4', 'mean')], label = "HQ4")
axis[0].plot(psdf_season_mean_std[('HFinal', 'mean')], label = "HFinal")
axis[0].set_title("Means Across Season")
axis[1].plot(psdf_season_mean_std[('AQ1', 'std')], label = "AQ1")
axis[1].plot(psdf_season_mean_std[('AQ2', 'std')], label = "AQ2")
axis[1].plot(psdf_season_mean_std[('AQ3', 'std')], label = "AQ3")
axis[1].plot(psdf_season_mean_std[('AQ4', 'std')], label = "AQ4")
axis[1].plot(psdf_season_mean_std[('AFinal', 'std')], label = "AFinal")
axis[1].plot(psdf_season_mean_std[('HQ1', 'std')], label = "HQ1")
axis[1].plot(psdf_season_mean_std[('HQ2', 'std')], label = "HQ2")
axis[1].plot(psdf_season_mean_std[('HQ3', 'std')], label = "HQ3")
axis[1].plot(psdf_season_mean_std[('HQ4', 'std')], label = "HQ4")
axis[1].plot(psdf_season_mean_std[('HFinal', 'std')], label = "HFinal")
axis[1].set_title("Standard Deviations Across Season")
plt.legend()
###Output
_____no_output_____
###Markdown
Based on the above graphs, we observe that the mean and standard deviation for AFinal and HFinal are always the highest across all the seasons. Apart from HQ2, the quarter means are fairly stable; HQ2 has the highest mean and standard deviation among the quarter variables and shows spikes across the seasons. Let's see what the analysis from `Spark` says below. Exploring `scoresFull.csv` with `Spark` Map-Reduce In this section, we'll use the `scoresFull.csv` data set and `Spark` map-reduce to find the mean and standard deviation of the quarter stats variables for each value of season. Then we'll display the results graphically. First, split the data by season to perform the MapReduce
###Code
#Split the data
list_rdd=[]
#Read csv file
df=pd.read_csv("scoresFull.csv")
#Split into many files by season
for i in range(2002,2015):
list_rdd.append(df[df["season"]==i])
#Transfer RDD files
Season_Rdd=spark.sparkContext.parallelize(list_rdd)
#Check if it is RDD
type(Season_Rdd)
###Output
_____no_output_____
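###Markdown
Before writing the map-reduce, a quick sanity check (illustrative only, not part of the original notebook) can confirm that each RDD element is one season's pandas DataFrame.
###Code
# Each element of Season_Rdd is a pandas DataFrame holding a single season's games
Season_Rdd.map(lambda season_df: (int(season_df['season'].iloc[0]), len(season_df))).collect()
###Output
_____no_output_____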
###Markdown
Create a map-reduce helper to compute the sum, count, and sum of squares
###Code
#Create def for mean and variance computing
#Parameter is list of desired variables
def mean_std(var_l):
#Create dict for mean and std
mean_dic={}
std_dic={}
for i in var_l:
#MapReduce for sum
sum_total=Season_Rdd.map(lambda x: x[i].sum()).map(lambda x: (i,x)).reduceByKey(lambda x,y: x+y).collect()[0][1]
#MapReduce for count
count_total=Season_Rdd.map(lambda x: x[i].count()).map(lambda x: (i,x)).reduceByKey(lambda x,y: x+y).collect()[0][1]
        #MapReduce for sum of squares
sum_sqt_total=Season_Rdd.flatMap(lambda x: x[i]).map(lambda x: (i,x**2)).reduceByKey(lambda x,y: x+y).collect()[0][1]
mean_dic[i]=sum_total/count_total
std_dic[i]= ((sum_sqt_total-count_total*(sum_total/count_total)**2)/(count_total-1))**0.5
return [mean_dic,std_dic]
###Output
_____no_output_____
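###Markdown
For reference, the `std_dic` line above relies on the usual one-pass rearrangement of the sample variance, \begin{align}s^2 = \frac{\sum_i x_i^2 - n\bar{x}^2}{n - 1},\end{align} so each variable only needs the total sum, the count, and the sum of squares produced by the three map-reduce jobs.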
###Markdown
Perform MapReduce to calculate mean and std
###Code
#Apply the list of variables to find mean and std
Report_Q=["AQ1", "HQ1","AQ2", "HQ2","AQ3", "HQ3","AQ4", "HQ4","AFinal","HFinal"]
Report_list=mean_std(Report_Q)
print("\nMean is:\n",Report_list[0],"\nSTD is:\n",Report_list[1])
###Output
Mean is:
{'AQ1': 3.9248055315471047, 'HQ1': 4.828867761452031, 'AQ2': 6.241428983002017, 'HQ2': 7.105157015269374, 'AQ3': 4.38692019590896, 'HQ3': 4.791126476519735, 'AQ4': 5.890233362143475, 'HQ4': 6.322961682512244, 'AFinal': 20.55718813022184, 'HFinal': 23.17401325266494}
STD is:
{'AQ1': 4.490700421089053, 'HQ1': 4.726903424009663, 'AQ2': 5.221593452957312, 'HQ2': 5.702788076137263, 'AQ3': 4.6327168250024915, 'HQ3': 4.755144845943296, 'AQ4': 5.278775371882614, 'HQ4': 5.417310283450347, 'AFinal': 10.195585841440774, 'HFinal': 10.40595174402417}
###Markdown
Using the information above, we can produce a line graph displaying the means for each quarter variable across the seasons.
###Code
dfs=spark.read.load("scoresFull.csv",format='csv',sep=",",inferSchema='true',header='true')
mean=dfs[['season','AQ1','AQ2','AQ3','AQ4','HQ1','HQ2','HQ3','HQ4']].groupby('season').avg().toPandas().sort_values(by = ["season"])
plt.plot(mean.season, mean["avg(AQ1)"], label = "AQ1")
plt.plot(mean.season, mean["avg(AQ2)"], label = "AQ2")
plt.plot(mean.season, mean["avg(AQ3)"], label = "AQ3")
plt.plot(mean.season, mean["avg(AQ4)"], label = "AQ4")
plt.plot(mean.season, mean["avg(HQ1)"], label = "HQ1")
plt.plot(mean.season, mean["avg(HQ2)"], label = "HQ2")
plt.plot(mean.season, mean["avg(HQ3)"], label = "HQ3")
plt.plot(mean.season, mean["avg(HQ4)"], label = "HQ4")
plt.legend(bbox_to_anchor = (1, 0.8))
plt.xlabel("Season")
plt.ylabel("Mean")
plt.title("Means for Quarter Variables Across Season")
###Output
_____no_output_____ |
notebook/HomeMotorAutomation.ipynb | ###Markdown
Generate Dataset from Log File
###Code
import os
import pandas as pd
from matplotlib import pyplot as plt
from scipy import stats
import numpy as np
LOG_FILE_DIR = '/home/vinaykudari/Documents/code/HomeMotorAutomation/data/'
FILE_NAME = 'homebridge.log'
LOG_FILE_PATH = os.path.join(LOG_FILE_DIR, FILE_NAME)
FROM_DATE = '2020-02-22'
def clean_log_file(log):
log = [line.replace('\x1b[37m', '').
replace('\x1b[39m \x1b[36m[Motor]\x1b[39m', ' : ').
replace('\n', '').
replace('\'','').
replace('.', '').split(" : ")
for line in log if 'turned' in line]
return log
def process_log(cleaned_log, from_date):
    log_df = pd.DataFrame(cleaned_log, columns=['date', 'status'])
log_df['datetime'] = pd.to_datetime(log_df.date, format="[%m/%d/%Y, %I:%M:%S %p]")
log_df['date'] = log_df.datetime.apply(lambda x : x.date())
log_df['date'] = pd.to_datetime(log_df.date, format="%Y-%m-%d")
processed_log = log_df[log_df['date'] > from_date]
return processed_log
def transform_log(processed_log):
log_night = processed_log.groupby(['date', 'status']).apply(max).reset_index(drop=True)
log_morning = processed_log.groupby(['date', 'status']).apply(min).reset_index(drop=True)
df_night = log_night.pivot(index='date', columns='status', values='datetime').reset_index()
df_morning = log_morning.pivot(index='date', columns='status', values='datetime').reset_index()
df = pd.concat([df_night, df_morning]).sort_values(by='date').reset_index(drop=True)
df.columns = ['date', 'turned_off_time', 'turned_on_time']
df['day_of_week'] = df['date'].dt.day_name()
df['time_taken'] = df['turned_off_time'] - df['turned_on_time']
df['turned_off_time'] = df['turned_off_time'].dt.time
df['turned_on_time'] = df['turned_on_time'].dt.time
return df[['date', 'day_of_week', 'turned_on_time', 'turned_off_time', 'time_taken']]
log = open(LOG_FILE_PATH).readlines()
log_cleaned = clean_log_file(log)
processed_log = process_log(log_cleaned, FROM_DATE)
motor_data = transform_log(processed_log)
motor_data = motor_data.set_index('date')
motor_data.head()
###Output
_____no_output_____
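###Markdown
Since the goal of this notebook is to generate a dataset from the log file, an optional step (not in the original notebook; the output path is illustrative) is to persist the derived table so it can be reloaded without re-parsing the log.
###Code
# Save the generated dataset next to the raw log; adjust the filename as needed
motor_data.to_csv(os.path.join(LOG_FILE_DIR, 'motor_data.csv'))
###Output
_____no_output_____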
###Markdown
Average Turn on Time
###Code
motor_data_avg = motor_data[['time_taken']]
motor_data_avg = motor_data_avg.groupby(by='date').agg(sum)
motor_data_avg.describe()
# Filtering outliers
motor_data_avg = motor_data_avg[
    (motor_data_avg.time_taken < '01:00:00') &
    (motor_data_avg.time_taken > '00:00:00')
]
motor_data_avg.loc[:, 'time_taken'] = motor_data_avg.time_taken.dt.seconds//60
motor_data_avg.describe()
plt.figure(figsize=(18,5))
plt.plot(
motor_data_avg.index,
motor_data_avg.time_taken
)
plt.xticks(motor_data_avg.index, rotation=90)
plt.show()
motor_data_avg['moving_average'] = motor_data_avg['time_taken'].rolling(window=10).mean()
motor_data_avg['2020-02-23': '2020-03-23'].describe()
motor_data_avg['2020-03-23': '2020-04-23'].describe()
motor_data_avg['2020-04-23': '2020-05-23'].describe()
# My Observations
# 2020-06-07
#1 The average time the motor stays turned on is increasing
#2 When the motor runs longer on a given day, it tends to run for a shorter time the next day, and vice versa
# Average by weekday
motor_data_avg_weekday = motor_data[['day_of_week', 'time_taken']]
motor_data_avg_weekday.loc[:, 'time_taken'] = motor_data_avg_weekday.time_taken.dt.seconds//60
motor_data_avg_weekday = motor_data_avg_weekday.groupby(by=['day_of_week']).mean()
cats = ['Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday', 'Monday']
motor_data_avg_weekday = motor_data_avg_weekday.groupby(['day_of_week']).sum().reindex(cats)
plt.figure(figsize=(5,3))
plt.plot(
motor_data_avg_weekday.index,
motor_data_avg_weekday.time_taken
)
plt.xticks(motor_data_avg_weekday.index, rotation=90)
plt.show()
# My Observations
# 2020-06-07
#1 A similar trend is observed with respect to the steep lows and highs
# Questions
#1 Do we have any specific water-related activities tied to particular weekdays?
#A
###Output
_____no_output_____ |
01_Introduction to Data Science in Python/Week_1.ipynb | ###Markdown
---_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._--- The Python Programming Language: Functions `add_numbers` is a function that takes two numbers and adds them together.
###Code
def add_numbers(x, y):
return x + y
add_numbers(1, 2)
###Output
_____no_output_____
###Markdown
`add_numbers` updated to take an optional 3rd parameter. Using `print` allows printing of multiple expressions within a single cell.
###Code
def add_numbers(x,y,z=None):
if (z==None):
return x+y
else:
return x+y+z
print(add_numbers(1, 2))
print(add_numbers(1, 2, 3))
###Output
3
6
###Markdown
`add_numbers` updated to take an optional flag parameter.
###Code
def add_numbers(x, y, z=None, flag=False):
if (flag):
print('Flag is true!')
if (z==None):
return x + y
else:
return x + y + z
print(add_numbers(1, 2, flag=True))
###Output
Flag is true!
3
###Markdown
Assign function `add_numbers` to variable `a`.
###Code
def add_numbers(x,y):
return x+y
a = add_numbers
a(1,2)
###Output
_____no_output_____
###Markdown
The Python Programming Language: Types and Sequences Use `type` to return the object's type.
###Code
type('This is a string')
type(None)
type(1)
type(1.0)
type(add_numbers)
###Output
_____no_output_____
###Markdown
Tuples are an immutable data structure (cannot be altered).
###Code
x = (1, 'a', 2, 'b')
type(x)
###Output
_____no_output_____
###Markdown
Lists are a mutable data structure.
###Code
x = [1, 'a', 2, 'b']
type(x)
###Output
_____no_output_____
###Markdown
Use `append` to append an object to a list.
###Code
x.append(3.3)
print(x)
###Output
[1, 'a', 2, 'b', 3.3]
###Markdown
This is an example of how to loop through each item in the list.
###Code
for item in x:
print(item)
###Output
1
a
2
b
3.3
###Markdown
Or using the indexing operator:
###Code
i=0
while( i != len(x) ):
print(x[i])
i = i + 1
###Output
1
a
2
b
3.3
###Markdown
Use `+` to concatenate lists.
###Code
[1,2] + [3,4]
###Output
_____no_output_____
###Markdown
Use `*` to repeat lists.
###Code
[1]*3
###Output
_____no_output_____
###Markdown
Use the `in` operator to check if something is inside a list.
###Code
1 in [1, 2, 3]
###Output
_____no_output_____
###Markdown
Now let's look at strings. Use bracket notation to slice a string.
###Code
x = 'This is a string'
print(x[0]) #first character
print(x[0:1]) #first character, but we have explicitly set the end character
print(x[0:2]) #first two characters
###Output
T
T
Th
###Markdown
This will return the last element of the string.
###Code
x[-1]
###Output
_____no_output_____
###Markdown
This will return the slice starting from the 4th element from the end and stopping before the 2nd element from the end.
###Code
x[-4:-2]
###Output
_____no_output_____
###Markdown
This is a slice from the beginning of the string and stopping before the 3rd element.
###Code
x[:3]
###Output
_____no_output_____
###Markdown
And this is a slice starting from the 4th element of the string and going all the way to the end.
###Code
x[3:]
firstname = 'Christopher'
lastname = 'Brooks'
print(firstname + ' ' + lastname)
print(firstname*3)
print('Chris' in firstname)
###Output
Christopher Brooks
ChristopherChristopherChristopher
True
###Markdown
`split` returns a list of all the words in a string, or a list split on a specific character.
###Code
firstname = 'Christopher Arthur Hansen Brooks'.split(' ')[0] # [0] selects the first element of the list
lastname = 'Christopher Arthur Hansen Brooks'.split(' ')[-1] # [-1] selects the last element of the list
print(firstname)
print(lastname)
###Output
Christopher
Brooks
###Markdown
Make sure you convert objects to strings before concatenating.
###Code
'Chris' + 2      # this raises a TypeError, since an int cannot be concatenated to a str
'Chris' + str(2) # convert the int to a string first
###Output
_____no_output_____
###Markdown
Dictionaries associate keys with values.
###Code
x = {'Christopher Brooks': '[email protected]', 'Bill Gates': '[email protected]'}
x['Christopher Brooks'] # Retrieve a value by using the indexing operator
x['Kevyn Collins-Thompson'] = None
x['Kevyn Collins-Thompson']
###Output
_____no_output_____
###Markdown
Iterate over all of the keys:
###Code
for name in x:
print(x[name])
###Output
_____no_output_____
###Markdown
Iterate over all of the values:
###Code
for email in x.values():
print(email)
###Output
_____no_output_____
###Markdown
Iterate over all of the items in the list:
###Code
for name, email in x.items():
print(name)
print(email)
###Output
_____no_output_____
###Markdown
You can unpack a sequence into different variables:
###Code
x = ('Christopher', 'Brooks', '[email protected]')
fname, lname, email = x
fname
lname
###Output
_____no_output_____
###Markdown
Make sure the number of values you are unpacking matches the number of variables being assigned.
###Code
x = ('Christopher', 'Brooks', '[email protected]', 'Ann Arbor')
fname, lname, email = x # raises a ValueError: four values but only three variables
###Output
_____no_output_____
###Markdown
The Python Programming Language: More on Strings
###Code
print('Chris' + 2)
print('Chris' + str(2))
###Output
_____no_output_____
###Markdown
Python has a built in method for convenient string formatting.
###Code
sales_record = {
'price': 3.24,
'num_items': 4,
'person': 'Chris'}
sales_statement = '{} bought {} item(s) at a price of {} each for a total of {}'
print(sales_statement.format(sales_record['person'],
sales_record['num_items'],
sales_record['price'],
sales_record['num_items']*sales_record['price']))
###Output
_____no_output_____
###Markdown
Reading and Writing CSV files Let's import our datafile mpg.csv, which contains fuel economy data for 234 cars.* mpg : miles per gallon* class : car classification* cty : city mpg* cyl : of cylinders* displ : engine displacement in liters* drv : f = front-wheel drive, r = rear wheel drive, 4 = 4wd* fl : fuel (e = ethanol E85, d = diesel, r = regular, p = premium, c = CNG)* hwy : highway mpg* manufacturer : automobile manufacturer* model : model of car* trans : type of transmission* year : model year
###Code
import csv
%precision 2
with open('mpg.csv') as csvfile:
mpg = list(csv.DictReader(csvfile))
mpg[:3] # The first three dictionaries in our list.
###Output
_____no_output_____
###Markdown
`csv.DictReader` has read in each row of our csv file as a dictionary. `len` shows that our list is comprised of 234 dictionaries.
###Code
len(mpg)
###Output
_____no_output_____
###Markdown
`keys` gives us the column names of our csv.
###Code
mpg[0].keys()
###Output
_____no_output_____
###Markdown
This is how to find the average cty fuel economy across all cars. All values in the dictionaries are strings, so we need to convert to float.
###Code
sum(float(d['cty']) for d in mpg) / len(mpg)
###Output
_____no_output_____
###Markdown
Similarly this is how to find the average hwy fuel economy across all cars.
###Code
sum(float(d['hwy']) for d in mpg) / len(mpg)
###Output
_____no_output_____
###Markdown
Use `set` to return the unique values for the number of cylinders the cars in our dataset have.
###Code
cylinders = set(d['cyl'] for d in mpg)
cylinders
###Output
_____no_output_____
###Markdown
Here's a more complex example where we are grouping the cars by number of cylinders, and finding the average cty mpg for each group.
###Code
CtyMpgByCyl = []
for c in cylinders: # iterate over all the cylinder levels
summpg = 0
cyltypecount = 0
for d in mpg: # iterate over all dictionaries
if d['cyl'] == c: # if the cylinder level type matches,
summpg += float(d['cty']) # add the cty mpg
cyltypecount += 1 # increment the count
CtyMpgByCyl.append((c, summpg / cyltypecount)) # append the tuple ('cylinder', 'avg mpg')
CtyMpgByCyl.sort(key=lambda x: x[0])
CtyMpgByCyl
###Output
_____no_output_____
###Markdown
Use `set` to return the unique values for the class types in our dataset.
###Code
vehicleclass = set(d['class'] for d in mpg) # what are the class types
vehicleclass
###Output
_____no_output_____
###Markdown
And here's an example of how to find the average hwy mpg for each class of vehicle in our dataset.
###Code
HwyMpgByClass = []
for t in vehicleclass: # iterate over all the vehicle classes
summpg = 0
vclasscount = 0
for d in mpg: # iterate over all dictionaries
        if d['class'] == t: # if the vehicle class matches,
summpg += float(d['hwy']) # add the hwy mpg
vclasscount += 1 # increment the count
HwyMpgByClass.append((t, summpg / vclasscount)) # append the tuple ('class', 'avg mpg')
HwyMpgByClass.sort(key=lambda x: x[1])
HwyMpgByClass
###Output
_____no_output_____
###Markdown
The Python Programming Language: Dates and Times
###Code
import datetime as dt
import time as tm
###Output
_____no_output_____
###Markdown
`time` returns the current time in seconds since the Epoch. (January 1st, 1970)
###Code
tm.time()
###Output
_____no_output_____
###Markdown
Convert the timestamp to datetime.
###Code
dtnow = dt.datetime.fromtimestamp(tm.time())
dtnow
###Output
_____no_output_____
###Markdown
Handy datetime attributes:
###Code
dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc. from a datetime
###Output
_____no_output_____
###Markdown
`timedelta` is a duration expressing the difference between two dates.
###Code
delta = dt.timedelta(days = 100) # create a timedelta of 100 days
delta
###Output
_____no_output_____
###Markdown
`date.today` returns the current local date.
###Code
today = dt.date.today()
today - delta # the date 100 days ago
today > today-delta # compare dates
###Output
_____no_output_____
###Markdown
The Python Programming Language: Objects and map() An example of a class in python:
###Code
class Person:
department = 'School of Information' #a class variable
def set_name(self, new_name): #a method
self.name = new_name
def set_location(self, new_location):
self.location = new_location
person = Person()
person.set_name('Christopher Brooks')
person.set_location('Ann Arbor, MI, USA')
print('{} lives in {} and works in the department {}'.format(person.name, person.location, person.department))
###Output
_____no_output_____
###Markdown
Here's an example of mapping the `min` function between two lists.
###Code
store1 = [10.00, 11.00, 12.34, 2.34]
store2 = [9.00, 11.10, 12.34, 2.01]
cheapest = map(min, store1, store2)
cheapest
###Output
_____no_output_____
###Markdown
Now let's iterate through the map object to see the values.
###Code
for item in cheapest:
print(item)
###Output
_____no_output_____
###Markdown
The Python Programming Language: Lambda and List Comprehensions Here's an example of lambda that takes in three parameters and adds the first two.
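For comparison (an added sketch, not in the original notebook), the lambda in the next cell behaves like this ordinary function definition:
```python
def my_function(a, b, c):
    # same as the lambda: only the first two parameters are used
    return a + b
```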
###Code
my_function = lambda a, b, c : a + b
my_function(1, 2, 3)
###Output
_____no_output_____
###Markdown
Let's iterate from 0 to 999 and return the even numbers.
###Code
my_list = []
for number in range(0, 1000):
if number % 2 == 0:
my_list.append(number)
my_list
###Output
_____no_output_____
###Markdown
Now the same thing but with list comprehension.
###Code
my_list = [number for number in range(0,1000) if number % 2 == 0]
my_list
###Output
_____no_output_____
###Markdown
The Python Programming Language: Numerical Python (NumPy)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Creating Arrays Create a list and convert it to a numpy array
###Code
mylist = [1, 2, 3]
x = np.array(mylist)
x
###Output
_____no_output_____
###Markdown
Or just pass in a list directly
###Code
y = np.array([4, 5, 6])
y
###Output
_____no_output_____
###Markdown
Pass in a list of lists to create a multidimensional array.
###Code
m = np.array([[7, 8, 9], [10, 11, 12]])
m
###Output
_____no_output_____
###Markdown
Use the `shape` attribute to find the dimensions of the array. (rows, columns)
###Code
m.shape
###Output
_____no_output_____
###Markdown
`arange` returns evenly spaced values within a given interval.
###Code
n = np.arange(0, 30, 2) # start at 0 count up by 2, stop before 30
n
###Output
_____no_output_____
###Markdown
`reshape` returns an array with the same data with a new shape.
###Code
n = n.reshape(3, 5) # reshape array to be 3x5
n
###Output
_____no_output_____
###Markdown
`linspace` returns evenly spaced numbers over a specified interval.
###Code
o = np.linspace(0, 4, 9) # return 9 evenly spaced values from 0 to 4
o
###Output
_____no_output_____
###Markdown
`resize` changes the shape and size of array in-place.
###Code
o.resize(3, 3)
o
###Output
_____no_output_____
###Markdown
`ones` returns a new array of given shape and type, filled with ones.
###Code
np.ones((3, 2))
###Output
_____no_output_____
###Markdown
`zeros` returns a new array of given shape and type, filled with zeros.
###Code
np.zeros((2, 3))
###Output
_____no_output_____
###Markdown
`eye` returns a 2-D array with ones on the diagonal and zeros elsewhere.
###Code
np.eye(3)
###Output
_____no_output_____
###Markdown
`diag` extracts a diagonal or constructs a diagonal array.
###Code
np.diag(y)
###Output
_____no_output_____
###Markdown
Create an array using repeating list (or see `np.tile`)
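Since `np.tile` is mentioned but not shown, here is a quick illustrative call (added, not in the original notebook) that produces the same repeated array:
```python
np.tile([1, 2, 3], 3)  # array([1, 2, 3, 1, 2, 3, 1, 2, 3])
```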
###Code
np.array([1, 2, 3] * 3)
###Output
_____no_output_____
###Markdown
Repeat elements of an array using `repeat`.
###Code
np.repeat([1, 2, 3], 3)
###Output
_____no_output_____
###Markdown
Combining Arrays
###Code
p = np.ones([2, 3], int)
p
###Output
_____no_output_____
###Markdown
Use `vstack` to stack arrays in sequence vertically (row wise).
###Code
np.vstack([p, 2*p])
###Output
_____no_output_____
###Markdown
Use `hstack` to stack arrays in sequence horizontally (column wise).
###Code
np.hstack([p, 2*p])
###Output
_____no_output_____
###Markdown
Operations Use `+`, `-`, `*`, `/` and `**` to perform element wise addition, subtraction, multiplication, division and power.
###Code
print(x + y) # elementwise addition [1 2 3] + [4 5 6] = [5 7 9]
print(x - y) # elementwise subtraction [1 2 3] - [4 5 6] = [-3 -3 -3]
print(x * y) # elementwise multiplication [1 2 3] * [4 5 6] = [4 10 18]
print(x / y) # elementwise division [1 2 3] / [4 5 6] = [0.25 0.4 0.5]
print(x**2) # elementwise power [1 2 3] ^2 = [1 4 9]
###Output
_____no_output_____
###Markdown
**Dot Product:** $ \begin{bmatrix}x_1 & x_2 & x_3\end{bmatrix}\cdot\begin{bmatrix}y_1 \\ y_2 \\ y_3\end{bmatrix}= x_1 y_1 + x_2 y_2 + x_3 y_3$
###Code
x.dot(y) # dot product 1*4 + 2*5 + 3*6
z = np.array([y, y**2])
print(len(z)) # number of rows of array
###Output
_____no_output_____
###Markdown
Let's look at transposing arrays. Transposing permutes the dimensions of the array.
###Code
z = np.array([y, y**2])
z
###Output
_____no_output_____
###Markdown
The shape of array `z` is `(2,3)` before transposing.
###Code
z.shape
###Output
_____no_output_____
###Markdown
Use `.T` to get the transpose.
###Code
z.T
###Output
_____no_output_____
###Markdown
The number of rows has swapped with the number of columns.
###Code
z.T.shape
###Output
_____no_output_____
###Markdown
Use `.dtype` to see the data type of the elements in the array.
###Code
z.dtype
###Output
_____no_output_____
###Markdown
Use `.astype` to cast to a specific type.
###Code
z = z.astype('f')
z.dtype
###Output
_____no_output_____
###Markdown
Math Functions Numpy has many built in math functions that can be performed on arrays.
###Code
a = np.array([-4, -2, 1, 3, 5])
a.sum()
a.max()
a.min()
a.mean()
a.std()
###Output
_____no_output_____
###Markdown
`argmax` and `argmin` return the index of the maximum and minimum values in the array.
###Code
a.argmax()
a.argmin()
###Output
_____no_output_____
###Markdown
Indexing / Slicing
###Code
s = np.arange(13)**2
s
###Output
_____no_output_____
###Markdown
Use bracket notation to get the value at a specific index. Remember that indexing starts at 0.
###Code
s[0], s[4], s[-1]
###Output
_____no_output_____
###Markdown
Use `:` to indicate a range. `array[start:stop]`Leaving `start` or `stop` empty will default to the beginning/end of the array.
###Code
s[1:5]
###Output
_____no_output_____
###Markdown
Use negatives to count from the back.
###Code
s[-4:]
###Output
_____no_output_____
###Markdown
A second `:` can be used to indicate step-size. `array[start:stop:stepsize]`Here we are starting at the 5th element from the end, and counting backwards by 2 until the beginning of the array is reached.
###Code
s[-5::-2]
###Output
_____no_output_____
###Markdown
Let's look at a multidimensional array.
###Code
r = np.arange(36)
r.resize((6, 6))
r
###Output
_____no_output_____
###Markdown
Use bracket notation to slice: `array[row, column]`
###Code
r[2, 2]
###Output
_____no_output_____
###Markdown
And use `:` to select a range of rows or columns.
###Code
r[3, 3:6]
###Output
_____no_output_____
###Markdown
Here we are selecting all the rows up to (and not including) row 2, and all the columns up to (and not including) the last column.
###Code
r[:2, :-1]
###Output
_____no_output_____
###Markdown
This is a slice of the last row, and only every other element.
###Code
r[-1, ::2]
###Output
_____no_output_____
###Markdown
We can also perform conditional indexing. Here we are selecting values from the array that are greater than 30. (Also see `np.where`)
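As an added aside (not in the original notebook), `np.where` returns the indices where the condition holds, which gives the same values as the boolean mask:
```python
rows, cols = np.where(r > 30)  # index arrays of the matching positions
r[rows, cols]                  # same values as r[r > 30]
```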
###Code
r[r > 30]
###Output
_____no_output_____
###Markdown
Here we are assigning all values in the array that are greater than 30 to the value of 30.
###Code
r[r > 30] = 30
r
###Output
_____no_output_____
###Markdown
Copying Data Be careful with copying and modifying arrays in NumPy!`r2` is a slice of `r`
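A quick way to check whether a slice shares memory with the original array (an added check, not in the original notebook):
```python
np.may_share_memory(r, r[:3, :3])  # True: basic slicing returns a view, not a copy
```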
###Code
r2 = r[:3,:3]
r2
###Output
_____no_output_____
###Markdown
Set this slice's values to zero ([:] selects the entire array)
###Code
r2[:] = 0
r2
###Output
_____no_output_____
###Markdown
`r` has also been changed!
###Code
r
###Output
_____no_output_____
###Markdown
To avoid this, use `r.copy()` to create a copy that will not affect the original array.
###Code
r_copy = r.copy()
r_copy
###Output
_____no_output_____
###Markdown
Now when r_copy is modified, r will not be changed.
###Code
r_copy[:] = 10
print(r_copy, '\n')
print(r)
###Output
_____no_output_____
###Markdown
Iterating Over Arrays Let's create a new 4 by 3 array of random numbers 0-9.
###Code
test = np.random.randint(0, 10, (4,3))
test
###Output
_____no_output_____
###Markdown
Iterate by row:
###Code
for row in test:
print(row)
###Output
_____no_output_____
###Markdown
Iterate by index:
###Code
for i in range(len(test)):
print(test[i])
###Output
_____no_output_____
###Markdown
Iterate by row and index:
###Code
for i, row in enumerate(test):
print('row', i, 'is', row)
###Output
_____no_output_____
###Markdown
Use `zip` to iterate over multiple iterables.
###Code
test2 = test**2
test2
for i, j in zip(test, test2):
print(i,'+',j,'=',i+j)
###Output
_____no_output_____ |
Crawler/CRMOOC.ipynb | ###Markdown
Basic crawling examples: crawling Baidu search results
###Code
import requests
keyword = 'Python'
try:
kv = {'wd':keyword}
r = requests.get('http://www.baidu.com/s', params = kv)
r.raise_for_status()
r.encoding = r.apparent_encoding
print(len(r.text))
except:
print("Error")
###Output
_____no_output_____
###Markdown
Changing the request headers to fetch JD product pages
###Code
import requests
url = 'https://www.aptagen.com/apta-index/'
try:
kv={'user-agent':'Mozilla/5.0'}
r=requests.get(url,headers=kv)
r.raise_for_status()
r.encoding=r.apparent_encoding
print('爬取成功')
except:
print("爬取失败")
###Output
爬取成功
###Markdown
Saving an image
###Code
import requests, os
url = 'https://www.aptagen.com/wp-content/uploads/358.png'
root = '/Users/jiamingxu/Downloads/'
path = root + url.split('/')[-1]
try:
if not os.path.exists(root):
os.mkdir(root)
if not os.path.exists(path):
r = requests.get(url, headers={'user-agent':'Chrome/16.0'})
with open(path, 'wb') as f:
f.write(r.content)
f.close()
print('File saved')
else:
print('File exists')
except:
print('Failed')
r.content
###Output
_____no_output_____
###Markdown
Looking up the location of an IP address
###Code
import requests
url = 'https://m.ip138.com/iplookup.asp?ip='
kv={'user-agent':'Mozilla/5.0'}
r = requests.get(url + '202.204.80.112', headers=kv)
r.status_code
r.encoding = r.apparent_encoding
r.text[100:1000]
###Output
_____no_output_____
###Markdown
Parsing HTML pages - Beautiful Soup
###Code
import requests
url = 'https://www.aptagen.com/apta-index/'
try:
kv={'user-agent':'Mozilla/5.0'}
r=requests.get(url,headers=kv)
r.raise_for_status()
r.encoding=r.apparent_encoding
demo = r.text
print('爬取成功')
except:
print("爬取失败")
from bs4 import BeautifulSoup
import re
soup = BeautifulSoup(demo, 'html.parser')
soup.a.parent.parent.name
tag = soup.a
tag.attrs['href']
type(tag)
soup.a.string
type(soup.p.string)
type(soup.p.string)
for i in soup.p.contents:
print(i)
soup.head.contents
soup.body.contents
len(soup.body.contents)
soup.body.contents[1]
soup.a.next_sibling.next_sibling
soup.a.previous_sibling.previous_sibling
for tag in soup(True):
print(tag.name)
for tag in soup(re.compile('p')):
print(tag.name)
for tag in soup('p', 'course'):
print(tag)
for tag in soup(id='link1'):
print(tag)
###Output
<a class="py1" href="http://www.icourse163.org/course/BIT-268001" id="link1">Basic Python</a>
###Markdown
Crawling the Chinese university rankings
###Code
import requests
from bs4 import BeautifulSoup
import bs4
def getHTMLText(url):
try:
kv={'user-agent':'Mozilla/5.0'}
r=requests.get(url, headers=kv, timeout = 30)
r.raise_for_status()
r.encoding=r.apparent_encoding
print('爬取成功')
return r.text
except:
return ""
def fillUnivList(ulist, html):
soup = BeautifulSoup(demo, 'html.parser')
for tr in soup.find('tbody').children:
if isinstance(tr, bs4.element.Tag):
tds = tr('td')
ulist.append([tds[0].string.split()[0],str(tds[1].find('a').string),float(tds[4].string)])
def printUnivList(ulist, num):
    tplt = "{0:^10}\t{1:{3}^10}\t{2:^10}" # {1:{3}} uses format's argument at index 3 as the fill character
    print(tplt.format('排名','学校名称','总分', chr(12288))) # chr(12288) = full-width (CJK) space
for i in range(num):
u = ulist[i]
print(tplt.format(u[0],u[1],u[2], chr(12288)))
#print("Suc" + str(num))
def main():
global uinfo
uinfo = []
url = 'https://www.shanghairanking.cn/rankings/bcur/2021'
html = getHTMLText(url)
fillUnivList(uinfo, html)
printUnivList(uinfo, 20) # first 20 universities
main()
###Output
爬取成功
排名 学校名称 总分
1 清华大学 969.2
2 北京大学 855.3
3 浙江大学 768.7
4 上海交通大学 723.4
5 南京大学 654.8
6 复旦大学 649.7
7 中国科学技术大学 577.0
8 华中科技大学 574.3
9 武汉大学 567.9
10 西安交通大学 537.9
11 哈尔滨工业大学 522.6
12 中山大学 519.3
13 北京师范大学 518.3
14 四川大学 516.6
15 北京航空航天大学 513.8
16 同济大学 508.3
17 东南大学 488.1
18 中国人民大学 487.8
19 北京理工大学 474.0
20 南开大学 465.3
###Markdown
Regular expressions
###Code
# regular expression, regex, RE
import re
match = re.match(r'[1-9]\d{5}', '165000 JGD')
if match:
print(match.group(0))
ls = re.findall(r'[1-9]\d{5}', 'BIT2019209 JGD165000')
ls
re.split(r'[1-9]\d{5}', 'BIT201209 JGD165000')
re.split(r'[1-9]\d{5}', 'BIT201209 JGD165000', maxsplit=1)
for m in re.finditer(r'[1-9]\d{5}', 'BIT201209 JGD165000'):
if m:
print(m.group(0))
re.sub(r'[1-9]\d{5}', ' ZIPCODE', 'BIT201209 JGD165000')
###Output
_____no_output_____
###Markdown
Object-oriented usage: compile once, use many times
###Code
pat = re.compile(r'[1-9]\d{5}')
rst = pat.search('BIT 165000 JDJ 100302')
rst.group(0)
rst.string
rst.re
rst.pos
rst.endpos
rst.start()
rst.end()
rst.span()
match = re.search(r'PY.*?N', 'PYANBNCNDN')
match.group(0)
###Output
_____no_output_____
###Markdown
Comparing product prices on Taobao
###Code
import requests
import re
def getHTMLText(url):
try:
r = requests.get(url, headers = {'user-agent':'Mozilla/5.0'}, timeout = 30)
        r.raise_for_status()
r.encoding = r.apparent_encoding
print('爬虫成功')
return r.text
except:
print( "ERRPR" )
def parsePage(ilt, html):
try:
plt = re.findall(r'\"\_blank\" title\=\"\"[\d\.]*"', html)
tlt = re.findall(r'\"\_blank\" title\=\" \".*?"', html)
for i in range(len(plt)):
price = eval(plt[i].split(':')[1])
title = eval(tlt[i].split(':')[1])
ilt.append([price, title])
except:
print("ParsePage Error")
def printGoodsList(ilt):
try:
tplt = "{:4}\t{:8}\t{:16}"
print(tplt.format('序号','价格','商品名称'))
count = 0
for g in ilt:
count = count + 1
print(tplt.format(count, g[0], g[1]))
except:
print("")
def main():
goods = 'iphone'
depth = 3
start_url = 'https://search.jd.com/Search?keyword=' + goods
infoList = []
for i in range(depth):
try:
url = start_url + '&page=' + str(i) + '&s=' + str(31*i)
html = getHTMLText(url)
parsePage(infoList, html)
except:
continue #print('main Error')
printGoodsList(infoList)
main()
###Output
爬虫成功
爬虫成功
爬虫成功
序号 价格 商品名称
###Markdown
Since Taobao now requires a logged-in account to look up prices, JD is used here for the comparison instead. Unlike Taobao, JD hides the price and product name inside HTML tags, so BeautifulSoup may be needed for an initial parsing pass before applying re.
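A hedged sketch of that BeautifulSoup-first approach (the tag and class names such as `gl-item`, `p-price` and `p-name` are assumptions about JD's markup, not verified here):
```python
from bs4 import BeautifulSoup

def parsePageWithSoup(ilt, html):
    soup = BeautifulSoup(html, 'html.parser')
    # the class names below are guesses about JD's page structure
    for item in soup.find_all('li', class_='gl-item'):
        price = item.find('div', class_='p-price')
        name = item.find('div', class_='p-name')
        if price and name:
            ilt.append([price.get_text(strip=True), name.get_text(strip=True)])
```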
###Code
goods = 'iphone'
depth = 1
start_url = 'https://search.jd.com/Search?keyword=' + goods
infoList = []
for i in range(depth):
try:
url = start_url + '&page=' + str(i) + '&s=' + str(31*i)
html = getHTMLText(url)
parsePage(infoList, html)
except:
print('error')
html
###Output
_____no_output_____ |
utilities/video-analysis/notebooks/Yolo/yolov3/yolov3-http-icpu-onnx/local_test_icpu.ipynb | ###Markdown
Test the Docker Container Locally (Optional)Uploading the Docker container onto a cloud-based container registry may be a long, bandwidth-intensive process. Thus, before uploading, we can **optionally** test our image to see if it works fine. Run Docker ContainerThe code snippet below runs our Docker container, mapping port 80 of the container to port 8080 on the host PC.> [!NOTE]> Execution of the below command may take several minutes.
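Once the container is running, you can optionally confirm that it started correctly by inspecting its logs (an extra check, not part of the original notebook):
```
!docker logs lvaExtension
```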
###Code
import sys
sys.path.append('../../../common')
from env_variables import *
!docker run --name lvaExtension -p 8080:80 -d -i $containerImageName
###Output
_____no_output_____
###Markdown
Send Sample ImageNow call the server score endpoint by sending a sample image.> [!IMPORTANT]> Here the sample image must be 416x416 pixels, as explained in the previous section. The result of the code below should be JSON output with inference results like the following.```{ "type": "entity", "entity": { "tag": { "value": "person", "confidence": 0.959613 }, "box": { "l": 0.692427, "t": 0.364723, "w": 0.084010, "h": 0.077655 } }},{ "type": "entity", "entity": { "tag": { "value": "vehicle", "confidence": 0.929751 }, "box": { "l": 0.521143, "t": 0.446333, "w": 0.166306, "h": 0.126898 } }}```
###Code
!curl https://lvamedia.blob.core.windows.net/public/people_in_cafeteria_416x416.jpg > "sample.jpg"
!curl -X POST http://127.0.0.1:8080/score -H "Content-Type: image/jpeg" --data-binary @sample.jpg
###Output
_____no_output_____
###Markdown
Stop Docker ContainerFinally, stop the running container and deallocate the resources.
###Code
!docker stop lvaExtension
!docker rm lvaExtension
###Output
_____no_output_____ |
fraud-detection2.ipynb | ###Markdown
SetupWe will be using TensorFlow and Keras. Let's begin:
###Code
import tensorflow as tf
from keras.layers import Input, Dense
from keras.models import Model, Sequential
from keras import regularizers
from keras.callbacks import EarlyStopping, ModelCheckpoint
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, accuracy_score, roc_auc_score, roc_curve
from sklearn.preprocessing import MinMaxScaler
from sklearn.manifold import TSNE
import os
%matplotlib inline
np.random.seed(0)
import keras
keras.__version__
###Output
_____no_output_____
###Markdown
Utility functions
###Code
### Utility Functions
## Plots
# Plot Feature Projection [credit: https://www.kaggle.com/shivamb/semi-supervised-classification-using-autoencoders]
def tsne_plot(x1, y1, name="graph.png"):
tsne = TSNE(n_components=2, random_state=0)
X_t = tsne.fit_transform(x1)
plt.figure(figsize=(12, 8))
plt.scatter(X_t[np.where(y1 == 0), 0], X_t[np.where(y1 == 0), 1], marker='o', color='g', linewidth='1', alpha=0.8, label='Non Fraud')
plt.scatter(X_t[np.where(y1 == 1), 0], X_t[np.where(y1 == 1), 1], marker='o', color='r', linewidth='1', alpha=0.8, label='Fraud')
plt.legend(loc='best');
plt.savefig(name);
plt.show();
# Plot Keras training history
def plot_loss(hist):
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
## Util methods copied from OCAN package due to failure to install as custom package [credit:https://github.com/PanpanZheng/OCAN]
def xavier_init(size): # initialize the weight-matrix W.
in_dim = size[0]
xavier_stddev = 1. / tf.sqrt(in_dim / 2.)
return tf.random_normal(shape=size, stddev=xavier_stddev)
def sample_shuffle_uspv(X):
n_samples = len(X)
s = np.arange(n_samples)
np.random.shuffle(s)
return np.array(X[s])
def pull_away_loss(g):
Nor = tf.norm(g, axis=1)
Nor_mat = tf.tile(tf.expand_dims(Nor, axis=1),
[1, tf.shape(g)[1]])
X = tf.divide(g, Nor_mat)
X_X = tf.square(tf.matmul(X, tf.transpose(X)))
mask = tf.subtract(tf.ones_like(X_X),
tf.diag(
tf.ones([tf.shape(X_X)[0]]))
)
pt_loss = tf.divide(tf.reduce_sum(tf.multiply(X_X, mask)),
tf.multiply(
tf.cast(tf.shape(X_X)[0], tf.float32),
tf.cast(tf.shape(X_X)[0]-1, tf.float32)))
return pt_loss
def one_hot(x, depth):
x_one_hot = np.zeros((len(x), depth), dtype=np.int32)
x = x.astype(int)
for i in range(x_one_hot.shape[0]):
x_one_hot[i, x[i]] = 1
return x_one_hot
def sample_Z(m, n): # generating the input for G.
return np.random.uniform(-1., 1., size=[m, n])
def draw_trend(D_real_prob, D_fake_prob, D_val_prob, fm_loss, f1):
fig = plt.figure()
fig.patch.set_facecolor('w')
# plt.subplot(311)
p1, = plt.plot(D_real_prob, "-g")
p2, = plt.plot(D_fake_prob, "--r")
p3, = plt.plot(D_val_prob, ":c")
plt.xlabel("# of epoch")
plt.ylabel("probability")
leg = plt.legend([p1, p2, p3], [r'$p(y|V_B)$', r'$p(y|\~{V})$', r'$p(y|V_M)$'], loc=1, bbox_to_anchor=(1, 1), borderaxespad=0.)
leg.draw_frame(False)
# plt.legend(frameon=False)
fig = plt.figure()
fig.patch.set_facecolor('w')
# plt.subplot(312)
p4, = plt.plot(fm_loss, "-b")
plt.xlabel("# of epoch")
plt.ylabel("feature matching loss")
# plt.legend([p4], ["d_real_prob", "d_fake_prob", "d_val_prob"], loc=1, bbox_to_anchor=(1, 1), borderaxespad=0.)
fig = plt.figure()
fig.patch.set_facecolor('w')
# plt.subplot(313)
p5, = plt.plot(f1, "-y")
plt.xlabel("# of epoch")
plt.ylabel("F1")
# plt.legend([p1, p2, p3, p4, p5], ["d_real_prob", "d_fake_prob", "d_val_prob", "fm_loss","f1"], loc=1, bbox_to_anchor=(1, 3.5), borderaxespad=0.)
plt.show()
## OCAN TF Training Utils
def generator(z):
G_h1 = tf.nn.relu(tf.matmul(z, G_W1) + G_b1)
G_logit = tf.nn.tanh(tf.matmul(G_h1, G_W2) + G_b2)
return G_logit
def discriminator(x):
D_h1 = tf.nn.relu(tf.matmul(x, D_W1) + D_b1)
D_h2 = tf.nn.relu(tf.matmul(D_h1, D_W2) + D_b2)
D_logit = tf.matmul(D_h2, D_W3) + D_b3
D_prob = tf.nn.softmax(D_logit)
return D_prob, D_logit, D_h2
# pre-train net for density estimation.
def discriminator_tar(x):
T_h1 = tf.nn.relu(tf.matmul(x, T_W1) + T_b1)
T_h2 = tf.nn.relu(tf.matmul(T_h1, T_W2) + T_b2)
T_logit = tf.matmul(T_h2, T_W3) + T_b3
T_prob = tf.nn.softmax(T_logit)
return T_prob, T_logit, T_h2
###Output
_____no_output_____
###Markdown
Loading the dataThe dataset can be downloaded from [Kaggle](https://www.kaggle.com/dalpozz/creditcardfraud). It contains data about credit card transactions that occurred during a period of two days, with 492 frauds out of 284,807 transactions.All variables in the dataset are numerical. The data has been transformed using PCA transformation(s) due to privacy reasons. The two features that haven't been changed are Time and Amount. Time contains the seconds elapsed between each transaction and the first transaction in the dataset.
###Code
pwd
raw_data = pd.read_csv("data/creditcardfraud.zip", compression='infer', header=0, sep=',', quotechar='"')
data, data_test = train_test_split(raw_data, test_size=0.25)
###Output
_____no_output_____
###Markdown
Exploration
###Code
data.head()
data.shape
###Output
_____no_output_____
###Markdown
31 columns, 2 of which are Time and Amount. The rest are output from the PCA transformation. Let's check for missing values:
###Code
data.isnull().values.any()
pd.value_counts(data['Class'])
LABELS = ["Normal", "Fraud"]
count_classes = pd.value_counts(data['Class'], sort = True)
count_classes.plot(kind = 'bar', rot=0)
plt.title("Transaction class distribution")
plt.xticks(range(2), LABELS)
plt.xlabel("Class")
plt.ylabel("Frequency");
###Output
_____no_output_____
###Markdown
We have a highly imbalanced dataset on our hands. Normal transactions overwhelm the fraudulent ones by a large margin. Let's look at the two types of transactions:
###Code
raw_data_sample = data[data['Class'] == 0].sample(1000).append(data[data['Class'] == 1]).sample(frac=1).reset_index(drop=True)
raw_data_x = raw_data_sample.drop(['Class'], axis = 1)
raw_data_x[['Time']]=MinMaxScaler().fit_transform(raw_data_x[['Time']])
raw_data_x[['Amount']]=MinMaxScaler().fit_transform(raw_data_x[['Amount']])
tsne_plot(raw_data_x, raw_data_sample["Class"].values, "raw.png")
###Output
_____no_output_____
###Markdown
You can see from above that using naive preprocessing, fraud transactions (in red) are mixed with genuine transactions (in green) with no clear distinctions. Now as an alternative, **we transform the Time field to time-of-day to account for intraday seasonality**, with the intuition that transactions that happen at certain times of day, such as late night, could be fraudulent. **The Amount field is transformed to log scale for normalization**, with the intuition that the scale of magnitude of a transaction could be a more relevant feature for fraud than linear amounts.
###Code
data.loc[:,"Time"] = data["Time"].apply(lambda x : x / 3600 % 24)
data.loc[:,'Amount'] = np.log(data['Amount']+1)
data_test.loc[:,"Time"] = data_test["Time"].apply(lambda x : x / 3600 % 24)
data_test.loc[:,'Amount'] = np.log(data_test['Amount']+1)
# data = data.drop(['Amount'], axis = 1)
print(data.shape)
data.head()
###Output
(213605, 31)
###Markdown
Visualize Preprocessed Transaction FeaturesSample another 1000 non-fraud transactions from the training set and plot with all fraudulent transactions in the training set. The T-SNE plot shows that after Time and Amount preprocessing, fraud transactions (in red) are adequately separated from non-fraud transactions (in green), despite the clusters being close to each other.
###Code
non_fraud = data[data['Class'] == 0].sample(1000)
fraud = data[data['Class'] == 1]
df = non_fraud.append(fraud).sample(frac=1).reset_index(drop=True)
X = df.drop(['Class'], axis = 1).values
Y = df["Class"].values
tsne_plot(X, Y, "original.png")
###Output
_____no_output_____
###Markdown
Extracting Latent Representations with AutoencodersWe train an autoencoder with 10k in-sample non-fraud transactions to extract latent representations (of 50 dimensions, in order to be consistent with the original paper). Notice that the encoder and decoder layers are not symmetric, and that the hidden layer sizes are non-increasing. This is because the goal here is not to learn the identity function nor sparse coding, but rather the abstract representations of the transactions. As we shall see, the learnt latent representations did a good job at clustering.
###Code
## input layer
input_layer = Input(shape=(X.shape[1],))
## encoding part
encoded = Dense(100, activation='tanh', activity_regularizer=regularizers.l1(10e-5))(input_layer)
encoded = Dense(50, activation='sigmoid')(encoded)
## decoding part
decoded = Dense(50, activation='tanh')(encoded)
## output layer
output_layer = Dense(X.shape[1], activation='relu')(decoded)
# Autoencoder model
autoencoder = Model(input_layer, output_layer)
autoencoder.compile(optimizer="adadelta", loss="mse")
# Min-max scaling
x = data.drop(["Class"], axis=1)
y = data["Class"].values
# x_scale = MinMaxScaler(feature_range=(-1, 1)).fit_transform(x)
x_norm, x_fraud = x.values[y == 0], x.values[y == 1]
###Output
_____no_output_____
###Markdown
Instead of training for a fixed number of epochs, this autoencoder is trained with early-stopping. The training stops when the validation losses fail to decrease for 20 consecutive epochs.
###Code
checkpointer = ModelCheckpoint(filepath='bestmodel.hdf5', verbose=0, save_best_only=True)
earlystopper = EarlyStopping(monitor='val_loss', mode='min', min_delta=0.005, patience=20, verbose=0,restore_best_weights=True)
x_norm_train_sample = x_norm[np.random.randint(x_norm.shape[0], size=10000),:]
hist = autoencoder.fit(x_norm_train_sample, x_norm_train_sample,
batch_size = 256, epochs = 400,
shuffle = True, validation_split = 0.05, verbose=0, callbacks=[checkpointer, earlystopper])
plot_loss(hist)
###Output
_____no_output_____
###Markdown
Extract latent representation by running transactions through the trained encoding layers of the autoencoder. We do so for another sample of 700 non-fraud transactions plus all fraud transactions from the training set. 700 is chosen to be consistent with the original paper.
###Code
hidden_representation = Sequential()
hidden_representation.add(autoencoder.layers[0])
hidden_representation.add(autoencoder.layers[1])
hidden_representation.add(autoencoder.layers[2])
norm_hid_rep = hidden_representation.predict(x_norm[np.random.randint(x_norm.shape[0], size=700),:])
fraud_hid_rep = hidden_representation.predict(x_fraud)
# norm_hid_rep = MinMaxScaler(feature_range=(-1, 1)).fit_transform(norm_hid_rep)
# fraud_hid_rep = MinMaxScaler(feature_range=(-1, 1)).fit_transform(fraud_hid_rep)
###Output
_____no_output_____
###Markdown
Visualize Latent Representations T-SNE plot with latent representations for the (700 non-fraud + fraud) samples. Notice how the clustering becomes even more pronounced using the latent representations of the transactions.
###Code
rep_x = np.append(norm_hid_rep, fraud_hid_rep, axis = 0)
y_n = np.zeros(norm_hid_rep.shape[0])
y_f = np.ones(fraud_hid_rep.shape[0])
rep_y = np.append(y_n, y_f)
tsne_plot(rep_x, rep_y, "latent_representation.png")
from IPython.display import display, Image, HTML
display(HTML("""<table align="center">
<tr ><td><b>Actual Representation (Before) </b></td><td><b>Latent Representation (Actual)</b></td></tr>
<tr><td><img src='original.png'></td><td>
<img src='latent_representation.png'></td></tr></table>"""))
###Output
_____no_output_____
###Markdown
Training of One-Class Adversarial Nets ClassifierThe One-Class Adversarial Nets (OCAN) proposed by [One-Class Adversarial Nets for Fraud Detection (Zheng et al., 2018)](https://arxiv.org/abs/1803.01798) is a state-of-the-art one-class classification model that can be trained using samples of only one class through a generative training process, hence a compelling candidate for fraud detection problems where positively labeled samples are rare and diverse in features. The following is TensorFlow training code adapted from the original paper's repository. The model is trained for 200 epochs.
###Code
dim_input = norm_hid_rep.shape[1]
mb_size = 70
D_dim = [dim_input, 100, 50, 2]
G_dim = [50, 100, dim_input]
Z_dim = G_dim[0]
X_oc = tf.placeholder(tf.float32, shape=[None, dim_input])
Z = tf.placeholder(tf.float32, shape=[None, Z_dim])
X_tar = tf.placeholder(tf.float32, shape=[None, dim_input])
# define placeholders for labeled-data, unlabeled-data, noise-data and target-data.
X_oc = tf.placeholder(tf.float32, shape=[None, dim_input])
Z = tf.placeholder(tf.float32, shape=[None, Z_dim])
X_tar = tf.placeholder(tf.float32, shape=[None, dim_input])
# X_val = tf.placeholder(tf.float32, shape=[None, dim_input])
# declare weights and biases of discriminator.
D_W1 = tf.Variable(xavier_init([D_dim[0], D_dim[1]]))
D_b1 = tf.Variable(tf.zeros(shape=[D_dim[1]]))
D_W2 = tf.Variable(xavier_init([D_dim[1], D_dim[2]]))
D_b2 = tf.Variable(tf.zeros(shape=[D_dim[2]]))
D_W3 = tf.Variable(xavier_init([D_dim[2], D_dim[3]]))
D_b3 = tf.Variable(tf.zeros(shape=[D_dim[3]]))
theta_D = [D_W1, D_W2, D_W3, D_b1, D_b2, D_b3]
# declare weights and biases of generator.
G_W1 = tf.Variable(xavier_init([G_dim[0], G_dim[1]]))
G_b1 = tf.Variable(tf.zeros(shape=[G_dim[1]]))
G_W2 = tf.Variable(xavier_init([G_dim[1], G_dim[2]]))
G_b2 = tf.Variable(tf.zeros(shape=[G_dim[2]]))
theta_G = [G_W1, G_W2, G_b1, G_b2]
# declare weights and biases of pre-train net for density estimation.
T_W1 = tf.Variable(xavier_init([D_dim[0], D_dim[1]]))
T_b1 = tf.Variable(tf.zeros(shape=[D_dim[1]]))
T_W2 = tf.Variable(xavier_init([D_dim[1], D_dim[2]]))
T_b2 = tf.Variable(tf.zeros(shape=[D_dim[2]]))
T_W3 = tf.Variable(xavier_init([D_dim[2], D_dim[3]]))
T_b3 = tf.Variable(tf.zeros(shape=[D_dim[3]]))
theta_T = [T_W1, T_W2, T_W3, T_b1, T_b2, T_b3]
D_prob_real, D_logit_real, D_h2_real = discriminator(X_oc)
G_sample = generator(Z)
D_prob_gen, D_logit_gen, D_h2_gen = discriminator(G_sample)
D_prob_tar, D_logit_tar, D_h2_tar = discriminator_tar(X_tar)
D_prob_tar_gen, D_logit_tar_gen, D_h2_tar_gen = discriminator_tar(G_sample)
# D_prob_val, _, D_h1_val = discriminator(X_val)
# disc. loss
y_real= tf.placeholder(tf.int32, shape=[None, D_dim[3]])
y_gen = tf.placeholder(tf.int32, shape=[None, D_dim[3]])
D_loss_real = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=D_logit_real,labels=y_real))
D_loss_gen = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=D_logit_gen, labels=y_gen))
ent_real_loss = -tf.reduce_mean(
tf.reduce_sum(
tf.multiply(D_prob_real, tf.log(D_prob_real)), 1
)
)
ent_gen_loss = -tf.reduce_mean(
tf.reduce_sum(
tf.multiply(D_prob_gen, tf.log(D_prob_gen)), 1
)
)
D_loss = D_loss_real + D_loss_gen + 1.85 * ent_real_loss
# gene. loss
pt_loss = pull_away_loss(D_h2_tar_gen)
y_tar= tf.placeholder(tf.int32, shape=[None, D_dim[3]])
T_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=D_logit_tar, labels=y_tar))
tar_thrld = tf.divide(tf.reduce_max(D_prob_tar_gen[:,-1]) +
tf.reduce_min(D_prob_tar_gen[:,-1]), 2)
indicator = tf.sign(
tf.subtract(D_prob_tar_gen[:,-1],
tar_thrld))
condition = tf.greater(tf.zeros_like(indicator), indicator)
mask_tar = tf.where(condition, tf.zeros_like(indicator), indicator)
G_ent_loss = tf.reduce_mean(tf.multiply(tf.log(D_prob_tar_gen[:,-1]), mask_tar))
fm_loss = tf.reduce_mean(
tf.sqrt(
tf.reduce_sum(
tf.square(D_logit_real - D_logit_gen), 1
)
)
)
G_loss = pt_loss + G_ent_loss + fm_loss
D_solver = tf.train.GradientDescentOptimizer(learning_rate=1e-3).minimize(D_loss, var_list=theta_D)
G_solver = tf.train.AdamOptimizer().minimize(G_loss, var_list=theta_G)
T_solver = tf.train.GradientDescentOptimizer(learning_rate=1e-3).minimize(T_loss, var_list=theta_T)
# Load data
# min_max_scaler = MinMaxScaler()
x_benign = norm_hid_rep # min_max_scaler.fit_transform(norm_hid_rep)
x_vandal = fraud_hid_rep # min_max_scaler.transform(fraud_hid_rep)
x_benign = sample_shuffle_uspv(x_benign)
x_vandal = sample_shuffle_uspv(x_vandal)
x_pre = x_benign
y_pre = np.zeros(len(x_pre))
y_pre = one_hot(y_pre, 2)
x_train = x_pre
y_real_mb = one_hot(np.zeros(mb_size), 2)
y_fake_mb = one_hot(np.ones(mb_size), 2)
x_test = x_benign.tolist() + x_vandal.tolist()
x_test = np.array(x_test)
y_test = np.zeros(len(x_test))
y_test[len(x_benign):] = 1
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# pre-training for target distribution
_ = sess.run(T_solver,
feed_dict={
X_tar:x_pre,
y_tar:y_pre
})
q = np.divide(len(x_train), mb_size)
d_ben_pro, d_fake_pro, fm_loss_coll = list(), list(), list()
f1_score = list()
d_val_pro = list()
n_round = 200
for n_epoch in range(n_round):
X_mb_oc = sample_shuffle_uspv(x_train)
for n_batch in range(int(q)):
_, D_loss_curr, ent_real_curr = sess.run([D_solver, D_loss, ent_real_loss],
feed_dict={
X_oc: X_mb_oc[n_batch*mb_size:(n_batch+1)*mb_size],
Z: sample_Z(mb_size, Z_dim),
y_real: y_real_mb,
y_gen: y_fake_mb
})
_, G_loss_curr, fm_loss_curr = sess.run([G_solver, G_loss, fm_loss],
feed_dict={Z: sample_Z(mb_size, Z_dim),
X_oc: X_mb_oc[n_batch*mb_size:(n_batch+1)*mb_size],
})
D_prob_real_, D_prob_gen_ = sess.run([D_prob_real, D_prob_gen],
feed_dict={X_oc: x_train,
Z: sample_Z(len(x_train), Z_dim)})
D_prob_vandal_ = sess.run(D_prob_real,
feed_dict={X_oc:x_vandal})
d_ben_pro.append(np.mean(D_prob_real_[:, 0]))
d_fake_pro.append(np.mean(D_prob_gen_[:, 0]))
d_val_pro.append(np.mean(D_prob_vandal_[:, 0]))
fm_loss_coll.append(fm_loss_curr)
prob, _ = sess.run([D_prob_real, D_logit_real], feed_dict={X_oc: x_test})
y_pred = np.argmax(prob, axis=1)
y_pred_prob = prob[:,1]
conf_mat = classification_report(y_test, y_pred, target_names=['genuine', 'fraud'], digits=4)
f1_score.append(float(list(filter(None, conf_mat.strip().split(" ")))[12]))
# print conf_mat
draw_trend(d_ben_pro, d_fake_pro, d_val_pro, fm_loss_coll, f1_score)
###Output
/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/sklearn/metrics/classification.py:1135: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples.
'precision', 'predicted', average, warn_for)
###Markdown
OCAN vs. Simple Linear ClassifierThis section applies the trained OCAN classifier and a simple linear logistic classifier to the random samples drawn previously from the training set. The linear classifier is trained on a different supervised train-val split drawn from the overall training set, since OCAN was trained with one-class data only. The results below show that for the fraud case, OCAN did not outperform the linear classifier significantly in-sample.
###Code
print ("OCAN: ")
print(conf_mat)
print ("Accuracy Score: ", accuracy_score(y_test, y_pred))
train_x, val_x, train_y, val_y = train_test_split(x_test, y_test, test_size=0.4)
clf = LogisticRegression(solver="lbfgs").fit(train_x, train_y)
pred_y = clf.predict(val_x)
pred_y_prob = clf.predict_proba(val_x)[:,1]
print ("")
print ("Linear Classifier: ")
print (classification_report(val_y, pred_y, target_names=['genuine', 'fraud'], digits=4))
print ("Accuracy Score: ", accuracy_score(val_y, pred_y))
fpr, tpr, thresh = roc_curve(val_y, pred_y_prob)
auc = roc_auc_score(val_y, pred_y_prob)
fpr2, tpr2, thresh2 = roc_curve(y_test, y_pred_prob)
auc2 = roc_auc_score(y_test, y_pred_prob)
plt.plot(fpr,tpr,label="linear in-sample, auc="+str(auc))
plt.plot(fpr2,tpr2,label="OCAN in-sample, auc="+str(auc2))
plt.legend(loc='best')
plt.show()
###Output
OCAN:
precision recall f1-score support
genuine 0.9157 0.9929 0.9527 700
fraud 0.9840 0.8280 0.8993 372
avg / total 0.9394 0.9356 0.9342 1072
Accuracy Score: 0.9356343283582089
Linear Classifier:
precision recall f1-score support
genuine 0.9509 0.9855 0.9679 275
fraud 0.9722 0.9091 0.9396 154
avg / total 0.9585 0.9580 0.9577 429
Accuracy Score: 0.958041958041958
###Markdown
Evaluation of Classifiers on Test SetEvaluating the two classifiers on the previously reserved test set, the conclusion holds if we look at the ROC curve: OCAN did not outperform the linear classifier significantly.
###Code
test_hid_rep = hidden_representation.predict(data_test.drop(['Class'], axis = 1).values)
test_y = data_test["Class"].values
prob_test, _ = sess.run([D_prob_real, D_logit_real], feed_dict={X_oc: test_hid_rep})
y_pred_test = np.argmax(prob_test, axis=1)
y_pred_prob_test = prob_test[:,1]
conf_mat_test = classification_report(test_y, y_pred_test, target_names=['genuine', 'fraud'], digits=4)
print ("OCAN: ")
print(conf_mat_test)
print ("Accuracy Score: ", accuracy_score(test_y, y_pred_test))
pred_y_test = clf.predict(test_hid_rep)
pred_y_prob_test = clf.predict_proba(test_hid_rep)[:,1]
print ("")
print ("Linear Classifier: ")
print (classification_report(test_y, pred_y_test, target_names=['genuine', 'fraud'], digits=4))
print ("Accuracy Score: ", accuracy_score(test_y, pred_y_test))
fpr, tpr, thresh = roc_curve(test_y, pred_y_prob_test)
auc = roc_auc_score(test_y, pred_y_prob_test)
fpr2, tpr2, thresh2 = roc_curve(test_y, y_pred_prob_test)
auc2 = roc_auc_score(test_y, y_pred_prob_test)
plt.plot(fpr,tpr,label="linear out-of-sample, auc="+str(auc))
plt.plot(fpr2,tpr2,label="OCAN out-of-sample, auc="+str(auc2))
plt.legend(loc='best')
plt.show()
###Output
OCAN:
precision recall f1-score support
genuine 0.9997 0.9904 0.9950 71082
fraud 0.1266 0.8250 0.2195 120
avg / total 0.9982 0.9901 0.9937 71202
Accuracy Score: 0.9901126372854695
Linear Classifier:
precision recall f1-score support
genuine 0.9998 0.9907 0.9952 71082
fraud 0.1352 0.8583 0.2336 120
avg / total 0.9983 0.9905 0.9939 71202
Accuracy Score: 0.9905058846661611
|
I Python Basics & Pandas/#01. Data Tables, Plots & Basic Concepts of Programming/01session_data-tables.ipynb | ###Markdown
01 | Basic Elements of Programming - Python + Data Science Tutorials in [YouTube ↗︎](https://www.youtube.com/c/PythonResolver) The Registry (aka the `environment`) > - Type `your name` and execute ↓
###Code
jesus
###Output
_____no_output_____
###Markdown
> - Type `sum` and execute ↓
###Code
len
sum
###Output
_____no_output_____
###Markdown
> - [ ] Why is it recognizing `sum` and not `your name`?> - [ ] How can you literally tell `your name` so that Python will recognize it?
###Code
'jesus'
###Output
_____no_output_____
###Markdown
Object-Oriented Programming > Which `objects` are predefined by Python?> - numbers> - text> - set of `objects`
###Code
type('hola')
###Output
_____no_output_____
###Markdown
string
###Code
type(12)
###Output
_____no_output_____
###Markdown
integer
###Code
type(12.3)
###Output
_____no_output_____
###Markdown
Use of Functions Functions inside Objects > - The `dog` makes `guau()`: `dog.guau()`> - The `cat` makes `miau()`: `cat.miau()`> - [ ] Can a `dog` make `miau()`: ~~`dog.miau()`~~ ?
###Code
texto = 'hola que tal'
type(texto)
lista = ['hola', 'que', 'tal']
type(lista)
texto
texto.title()
lista.title()
texto.title()
type(texto)
type(lista)
lista.append(3)
lista
###Output
_____no_output_____
###Markdown
Predefined Functions in Python (_Built-in_ Functions) > - https://docs.python.org/3/library/functions.html
###Code
texto.title()
type(texto)
title('algun texto')
sum
len
notas = [7, 8, 4]
notas.sum()
sum(notas)
7+8+4
len(notas)
mean(notas)
average(notas)
sum(notas)/len(notas)
###Output
_____no_output_____
###Markdown
Discipline to Search Solutions in Google > Apply the following steps when **looking for solutions in Google**:>> 1. **Necessity**: How to load an Excel in Python?> 2. **Search in Google**: by keywords> - `load excel python`> - ~~how to load excel in python~~> 3. **Solution**: What's the `function()` that loads an Excel in Python? External Functions > - [ ] Load [this Excel](https://github.com/sotastica/data/raw/main/internet_usage_spain.xlsx) into Python
###Code
df = pd.read_excel("sample.xlsx")
import pandas as pd
pd
pandas
df = pd.read_excel("sample.xlsx")
pd.read_excel('internet_usage_spain.xlsx')
###Output
_____no_output_____
###Markdown
Change Default Parameters of a Function > - By Default ↓
###Code
pd.read_excel(io='internet_usage_spain.xlsx', sheet_name=0)
###Output
_____no_output_____
###Markdown
> - [ ] Change the Default Object of the Parameter ↓
###Code
pd.read_excel(io='internet_usage_spain.xlsx', sheet_name=1)
pd.read_excel(io='internet_usage_spain.xlsx', hoja=1)
###Output
_____no_output_____
###Markdown
The Elements of Programming > - `module`: where the code of functions are stored.> - `function()`: execute several lines of code with one `word()`.> - `(parameter=?)`: to **configure** the function's behaviour.> - `object` | `instance` | `class`: **data structure** to store information. Code Syntax > In which order Python reads the line of code?> - From left to right.> - From up to down.
###Code
pd
###Output
_____no_output_____
###Markdown
1. `library`2. `.` **DOT NOTATION** to access the functions of the module and the functions of the object3. `function()`4. we pass `objects` to the `parameters` - we pass the `str ("internet_usage_spain.xlsx")` to the `parameter (io=?)` - we pass the `int (1)` to the `parameter (sheet_name=?)`5. we execute6. magic happens7. the `function()` returns an `object`
###Code
pd.read_excel(io='internet_usage_spain.xlsx', sheet_name=1)
type(pd.read_excel(io='internet_usage_spain.xlsx', sheet_name=1))
type(89)
type('un texto')
###Output
_____no_output_____
###Markdown
Python doesn't know about the Excel File> - Python just interprets the `string`> - and the `integer`
###Code
'internet_usage_spain.xlsx'
1
###Output
_____no_output_____
###Markdown
> - As you pass the `objects` to the `parameters` of the `function()`> > - `function(parameter=object)`
###Code
pd.read_excel(io='internet_usage_spain.xlsx', sheet_name=1)
###Output
_____no_output_____
###Markdown
Source Code Execution | What happens inside the computer ? > - When we type `pd.read_excel()`> - and execute `shift` + `enter`
###Code
pd.read_excel(io='internet_usage_spain.xlsx', sheet_name=1)
###Output
_____no_output_____
###Markdown
`~/miniforge3/lib/python3.9/site-packages/pandas/io/excel/_base.py` > - How Python locates the error when it doesn't find the `filename`?
###Code
pd.read_excel(io='sample.xlsx', sheet_name=1)
'internet_usage_spain.xlsx'
1
pd.read_excel(io='internet_usage_spain.xlsx', sheet_name=1)
###Output
_____no_output_____
###Markdown
Recap | Types of Functions Buit-in (Predefined) `functions()`
###Code
sum
len
type(sum)
###Output
_____no_output_____
###Markdown
External `functions()` from `modules`
###Code
pd.read_csv
pd.read_excel
type(pd.read_excel)
###Output
_____no_output_____
###Markdown
`functions()` within `instances`
###Code
df = pd.read_excel(io='internet_usage_spain.xlsx', sheet_name=1)
type(df)
df.hist()
df.boxplot()
df.describe()
pd.describe()
df.describe()
pd.des
###Output
_____no_output_____ |
notebook/credit_card_approval_pred.ipynb | ###Markdown
AutoClassifierThis notebook describes the development of a system that can perform auto data cleaning, dimensionality reduction, training, and testing of several classifiers for a single class classification problem. The motivation behind creating this product was to demonstrate the value that data-driven methods can bring with very little initial investment. The auto classifier system can even handle text using natural language processing. However, three assumptions are made for the dataset: 1) all text columns (that are not categorical in nature) have been combined into a single column, 2) all other columns can be converted into category, float, and/or int, 3) the class label column is the last column of the dataset. The AutoClassifier will perform the following functions: Conversion of qualified features to category variables Feature selection using LASSO (optional) Vectorizing text (optional) Transformation to integer representation Fitting and hyperparameter optimization of several 'shallow' classifiers Fitting DNN model Saving and/or loading model to/from external files The AutoClassifier requires minimal pre-processing of the data and provides a quick and easy-to-use front end for users. AutoClassifier used for auto credit card approval - case study descriptionThis case study focuses on building an automatic credit card approval predictor using the AutoClassifier. The Credit Approval Data Set from the UCI Machine Learning Repository is used as an example dataset to demonstrate the methodology. Although the feature labels are sanitized to maintain anonymity, expert opinions suggest that the feature labels may be: Gender, Age, Debt, Married_status, BankCustomer, EducationLevel, Ethnicity, YearsEmployed, NoPriorDefault, Employed, CreditScore, DriversLicense, Citizen, ZipCode, Income and ApprovalStatus. Credit card approval is a perfect case study for applied machine learning since the application approval process can be easily framed as a classification problem. The underlying pattern that differentiates between trustworthy customers and unreliable customers can be ascertained through the customer's credit and personal details. The conventional system for approvals was subjective and based on the bank manager's experience. Using machine learning, this subjective judgement can be supplemented with quantitative metrics that can lead to faster and more accurate approval processes. Sources: Data: Credit Approval Data Set, UCI Machine Learning Repository
###Code
%matplotlib inline
import pandas as pd
import numpy as np
df_all = pd.read_csv('../dat/cc_approvals_text.csv', na_values='?')
text_col = 'Tweet' #Column header that contains text
df_all.head()
# data includes various features that are considered before
# approving an individual’s credit card application.
# The last column is the Approval Status.
###Output
_____no_output_____
###Markdown
Data cleaning and pre-processingAs can be seen, the dataset requires some cleaning before it can be used for any exploratory analysis.
###Code
def convert_cat_cols(df, cat_var_limit=10, verbose=False):
"""
Converts columns with a small amount of unique values that are of
type Object into categorical variables.
Number of unique values defined by cat_var_limit
"""
cat_var_true = df.apply(lambda x:
len(x.value_counts()) < cat_var_limit)
object_type_true = df.apply(lambda x:
x.value_counts().index.dtype == 'O')
if cat_var_true[object_type_true].any():
df[cat_var_true[object_type_true].index] = \
df[cat_var_true[object_type_true].index].astype('category')
if verbose:
print(df[cat_var_true[object_type_true].index].describe())
return df
def impute_most_freq(df):
"""
Imputes the most frequent value in place of NaN's
"""
most_freq = df.apply(lambda x: x.value_counts().index[0])
return df.fillna(most_freq)
df = df_all.drop([text_col], axis=1) #saving all non text columns
df = convert_cat_cols(df, 10)
df = impute_most_freq(df)
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 690 entries, 0 to 689
Data columns (total 16 columns):
Gender 690 non-null category
Age 690 non-null float64
Debt 690 non-null float64
Married_status 690 non-null category
BankCustomer 690 non-null category
EducationLevel 690 non-null category
Ethnicity 690 non-null category
YearsEmployed 690 non-null float64
NoPriorDefault 690 non-null category
Employed 690 non-null category
CreditScore 690 non-null int64
DriversLicense 690 non-null category
Citizen 690 non-null category
ZipCode 690 non-null float64
Income 690 non-null int64
ApprovalStatus 690 non-null category
dtypes: category(10), float64(4), int64(2)
memory usage: 41.1 KB
###Markdown
All data cleaning is done. The next step is to perform preprocessing of the dataset for insertion into the scikit-learn library functions, which require numeric values. We'll convert our dataset into a binary integer representation using pd.get_dummies, as well as a 0 to n_classes-1 integer representation using the scikit-learn transformer LabelEncoder.
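A tiny illustrative contrast between the two encodings (an added sketch, not part of the original notebook):
```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

colors = pd.Series(['red', 'green', 'blue'], dtype='category')
pd.get_dummies(colors)                 # three binary columns: blue, green, red
LabelEncoder().fit_transform(colors)   # array([2, 1, 0]) - one integer per category
```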
###Code
df_bin = pd.get_dummies(df, drop_first=True)
#Converts category variables into a multicolumn binary integer
#representation for each unique value
def convert_str_int_labels(df):
"""
Converts columns with factors into integer representation
"""
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df = df.apply(lambda x: le.fit_transform(x))
return df
df[df.select_dtypes(include=['object']).columns] = \
convert_str_int_labels(df.select_dtypes(include=['object']))
#Transforms an object variable column into a integer variable column
print(df_bin.info(),'\n',df.info())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 690 entries, 0 to 689
Data columns (total 38 columns):
Age 690 non-null float64
Debt 690 non-null float64
YearsEmployed 690 non-null float64
CreditScore 690 non-null int64
ZipCode 690 non-null float64
Income 690 non-null int64
Gender_b 690 non-null uint8
Married_status_u 690 non-null uint8
Married_status_y 690 non-null uint8
BankCustomer_gg 690 non-null uint8
BankCustomer_p 690 non-null uint8
EducationLevel_c 690 non-null uint8
EducationLevel_cc 690 non-null uint8
EducationLevel_d 690 non-null uint8
EducationLevel_e 690 non-null uint8
EducationLevel_ff 690 non-null uint8
EducationLevel_i 690 non-null uint8
EducationLevel_j 690 non-null uint8
EducationLevel_k 690 non-null uint8
EducationLevel_m 690 non-null uint8
EducationLevel_q 690 non-null uint8
EducationLevel_r 690 non-null uint8
EducationLevel_w 690 non-null uint8
EducationLevel_x 690 non-null uint8
Ethnicity_dd 690 non-null uint8
Ethnicity_ff 690 non-null uint8
Ethnicity_h 690 non-null uint8
Ethnicity_j 690 non-null uint8
Ethnicity_n 690 non-null uint8
Ethnicity_o 690 non-null uint8
Ethnicity_v 690 non-null uint8
Ethnicity_z 690 non-null uint8
NoPriorDefault_t 690 non-null uint8
Employed_t 690 non-null uint8
DriversLicense_t 690 non-null uint8
Citizen_p 690 non-null uint8
Citizen_s 690 non-null uint8
ApprovalStatus_- 690 non-null uint8
dtypes: float64(4), int64(2), uint8(32)
memory usage: 54.0 KB
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 690 entries, 0 to 689
Data columns (total 16 columns):
Gender 690 non-null int32
Age 690 non-null float64
Debt 690 non-null float64
Married_status 690 non-null int32
BankCustomer 690 non-null int32
EducationLevel 690 non-null int32
Ethnicity 690 non-null int32
YearsEmployed 690 non-null float64
NoPriorDefault 690 non-null int32
Employed 690 non-null int32
CreditScore 690 non-null int64
DriversLicense 690 non-null int32
Citizen 690 non-null int32
ZipCode 690 non-null float64
Income 690 non-null int64
ApprovalStatus 690 non-null int32
dtypes: float64(4), int32(10), int64(2)
memory usage: 59.4 KB
None
None
###Markdown
Text cleaning and vectorizationThe text column will now be tokenized and converted into a corpus statistic, in this case using Tf-Idf and n-grams.
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
def clean_text(text_series):
"""
Cleans a column of Tweets. Removes all special characters,
websites, mentions.
"""
from re import sub as resub
text_series = text_series.apply(
lambda x:resub(
"[^A-Za-z0-9 ]+|(\w+:\/\/\S+)|htt", " ", x)
).str.strip().str.lower()
return text_series
df_all[text_col] = clean_text(df_all[text_col])
text_proc_pipe = TfidfVectorizer(max_features=100, ngram_range=(1,2),
stop_words='english')
text_numeric_matrix = text_proc_pipe.fit_transform(df_all[text_col])
df_all[text_col].head(10)
###Output
_____no_output_____
###Markdown
Splitting into train and test sets with auto feature selectionWith all numeric data, the next data-prep step will be to split the data into a training set and testing set. Ideally, no information from the test data should be used to scale the training data or should be used to direct the training process of a machine learning model. Hence, we first split the data and then apply the scaling. We will also perform feature selection on the dataframe. Feature selection is performed using the LASSO weight shrinking process. Features with coefficients that are around 0 are then rejected.
###Code
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Lasso
import matplotlib.pyplot as plt
def feat_select(df, text_mat=None, test_size_var=0.3,
alpha_space=np.linspace(0.01,0.02,20),
random_state_var=21, use_feat_select=True, plot=True):
"""
Performs feature selection on a dataframe with a single target
variable and n features. Test train split is also performed and only
splits of selected features are returned. Feature selection performed
using LASSO weight shrinking.
"""
x_train, x_test, y_train, y_test = train_test_split(
df.iloc[:,:-1], df.iloc[:,-1], test_size=test_size_var,
random_state=random_state_var, stratify=df.iloc[:,-1])
if use_feat_select:
param_grid = {'alpha': alpha_space}
lasso_gcv = GridSearchCV(Lasso(normalize=False), param_grid, cv=5,
n_jobs=-1, iid=True)
lasso_coeffs = lasso_gcv.fit(x_train, y_train).best_estimator_.coef_
if plot:
plt.barh(y=range(len(df.columns[:-1])),width=np.abs(lasso_coeffs)
,tick_label=df.columns[:-1].values)
plt.ylabel('Column features')
plt.xlabel('Coefficient score')
plt.xticks(rotation=90)
plt.show()
select_feats = df.columns[:-1][np.abs(lasso_coeffs) > 0].values
x_train = x_train.loc[:,select_feats]
x_test = x_test.loc[:,select_feats]
if text_mat is not None:
# Text data is concatenated here if present
x_train = np.concatenate((x_train.values,text_mat[x_train.index,:]),axis=1)
x_test = np.concatenate((x_test.values,text_mat[x_test.index,:]),axis=1)
else:
x_train = x_train.values
x_test = x_test.values
return x_train, x_test, y_train.values, y_test.values
x_train, x_test, y_train, y_test = feat_select(
df, text_numeric_matrix.toarray(), use_feat_select=True, plot=True)
###Output
_____no_output_____
###Markdown
Creating a transformation and analysis pipelineThe dataset can now be rescaled so that no feature can artificially bias the analysis. In this case, no specialized feature engineering is performed and all the feature variables will be rescaled broadly. Both the binary integer representation and the non-binary integer representation will be tested. Note 1: better scores can be expected from intelligently rescaling the data. For example, age can be standardized while credit scores can be rescaled between 0 and -1. Note 2: The above preprocessing functions can be integrated into the pipeline object using FunctionTransformer.
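As a rough illustration of Note 2, the cell below is my own sketch (not part of the original notebook): it wraps the `clean_text` helper defined above in a `FunctionTransformer` so that cleaning, TF-IDF vectorization and a classifier live in a single pipeline. The `LogisticRegression` step and the commented `fit` call are assumptions added purely for demonstration.
###Code
# Illustrative sketch only: wiring the earlier clean_text helper into a Pipeline.
# FunctionTransformer(validate=False) passes the raw text Series straight to clean_text.
from sklearn.preprocessing import FunctionTransformer
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
text_clf_pipe = Pipeline([
    ('clean', FunctionTransformer(clean_text, validate=False)),
    ('tfidf', TfidfVectorizer(max_features=100, ngram_range=(1, 2),
                              stop_words='english')),
    ('clf', LogisticRegression(solver='lbfgs', max_iter=1000)),
])
# Example usage (assumes a 0/1 label vector y aligned with df_all[text_col]):
# text_clf_pipe.fit(df_all[text_col], y)
###Output
_____no_output_____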
###Code
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import Normalizer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
x_train, x_test, y_train, y_test = feat_select(
df, text_numeric_matrix.toarray(), use_feat_select=True, plot=True)
scaler = [('Scaler', MinMaxScaler(feature_range=(-1, 1))),\
('Scaler', Normalizer()),\
('Scaler', StandardScaler())]
classifiers = [('logreg', LogisticRegression(solver='lbfgs', max_iter=1000)),\
('knnstep', KNeighborsClassifier()),\
('svcstep', SVC(gamma='scale')),\
('gradbooststep', GradientBoostingClassifier(subsample=.8))]
parameters = {'logreg':{'logreg__C': [0.8,1,1.2,1.4]} ,\
'knnstep':{'knnstep__n_neighbors': np.arange(3,16)},\
'svcstep':{'svcstep__C': [0.5,1,1.5,2,2.5,2.6]},\
'gradbooststep':{'gradbooststep__max_depth': [2,3,4,5],\
'gradbooststep__n_estimators': \
[40,60,80,100]}}
model_dict = {}
for clf in classifiers:
pipeline = Pipeline([scaler[0],clf])
print('\nAnalysis for : ' + clf[0])
gcv = GridSearchCV(pipeline, param_grid=parameters[clf[0]], cv=5,
iid=True)
gcv.fit(x_train,y_train)
model_dict[clf[0]] = gcv
print(gcv.best_params_)
print(pd.DataFrame(gcv.cv_results_)[['mean_test_score','params']])
print('The score for ' + clf[0] + ' is ' + str(gcv.score(x_test,y_test)))
###Output
_____no_output_____
###Markdown
A similar analysis will now be done on the binary integer representation dataset.
###Code
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import Normalizer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
x_train, x_test, y_train, y_test = feat_select(df_bin,text_numeric_matrix.toarray(),use_feat_select=True,plot=True)
scaler = [('Scaler', MinMaxScaler(feature_range=(-1, 1))),\
('Scaler', Normalizer()),\
('Scaler', StandardScaler())]
classifiers = [('logreg', LogisticRegression(solver='lbfgs',
max_iter=1000)),\
('knnstep', KNeighborsClassifier()),\
('svcstep', SVC(gamma='scale')),\
('gradbooststep', GradientBoostingClassifier(subsample=.8))]
parameters = {'logreg': {'logreg__C': [0.8,1,1.2,1.4]} ,\
'knnstep': {'knnstep__n_neighbors': np.arange(3,16)},\
'svcstep': {'svcstep__C': [0.5,1,1.5,2,2.5,2.6]},\
'gradbooststep': {'gradbooststep__max_depth': [2,3,4,5],\
'gradbooststep__n_estimators': \
[40,60,80,100]}}
model_dict = {}
for clf in classifiers:
pipeline = Pipeline([scaler[0],clf])
print('\nAnalysis for : ' + clf[0])
gcv = GridSearchCV(pipeline, param_grid=parameters[clf[0]], cv=5,
iid=True)
gcv.fit(x_train,y_train)
model_dict[clf[0]] = gcv
print(gcv.best_params_)
print(pd.DataFrame(gcv.cv_results_)[['mean_test_score','params']])
print('The score for ' + clf[0] + ' is ' + str(gcv.score(x_test,y_test)))
###Output
_____no_output_____
###Markdown
Exploring Deep Neural Networks (DNNs) for classification In this particular case, it would seem that we do not have enough data to satisfactorily train a DNN. However, we will build a 6 layer DNN with a ramped layer node architecture for comparison with the 'shallow' learning case. The binary integer representation dataset will be used without feature selection to maximize data use.
###Code
from keras.layers import Dense
from keras.models import Sequential
from keras.callbacks import EarlyStopping
x_train, x_test, y_train, y_test = feat_select(
df_bin, text_numeric_matrix.toarray(), use_feat_select=False,
plot=False)
steps = ('Scaler', MinMaxScaler(feature_range=(-1, 1)))
steps_norm = ('Scaler', Normalizer())
steps_stand = ('Scaler', StandardScaler())
scaler = steps[1]
x_train = scaler.fit_transform(x_train)
x_test = scaler.fit_transform(x_test)
n_cols = x_train.shape[-1]
model = Sequential()
model.add(Dense(5, activation='relu', input_shape=(n_cols,)))
model.add(Dense(10, activation='relu'))
model.add(Dense(15, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(15, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(2, activation='softmax'))
early_stop_monitor = EarlyStopping(patience=2)
model.compile (optimizer='adam',
loss='categorical_crossentropy', metrics=['accuracy'])
train_dp_model = model.fit(x_train, pd.get_dummies(y_train).values,
validation_split = .2, epochs = 20,
callbacks =[early_stop_monitor],
verbose=True)
print('Loss metrics: ' + str(train_dp_model.history['loss'][-1]))
pred_prob = model.predict(x_test)
accuracy_dp = np.sum((pred_prob[:,1]>=0.5)==y_test) / len(y_test)
print('Testing accuracy: ' + str(accuracy_dp))
###Output
Using TensorFlow backend.
WARNING: Logging before flag parsing goes to stderr.
W1110 18:31:33.717415 4436 deprecation_wrapper.py:119] From C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
W1110 18:31:33.748692 4436 deprecation_wrapper.py:119] From C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
W1110 18:31:33.748692 4436 deprecation_wrapper.py:119] From C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
W1110 18:31:33.951794 4436 deprecation_wrapper.py:119] From C:\ProgramData\Anaconda3\lib\site-packages\keras\optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.
W1110 18:31:33.998696 4436 deprecation_wrapper.py:119] From C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:3295: The name tf.log is deprecated. Please use tf.math.log instead.
W1110 18:31:34.264324 4436 deprecation.py:323] From C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\math_grad.py:1250: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
W1110 18:31:34.436179 4436 deprecation_wrapper.py:119] From C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.
###Markdown
As we can see from the results above, the accuracy of the DNN classifier is quite close to that of the logistic regression classifier, despite minimal feature engineering and comparatively little data. AutoClassifier Backend PipelineIncludes error handling using assertions and try/except clauses.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
def convert_cat_cols(df, cat_var_limit=10, verbose=False):
"""
Converts columns with a small amount of unique values that are of
type Object into categorical variables.
Number of unique values defined by cat_var_limit
"""
cat_var_true = df.apply(lambda x:
len(x.value_counts()) < cat_var_limit)
object_type_true = df.apply(lambda x:
x.value_counts().index.dtype == 'O')
if cat_var_true[object_type_true].any():
df[cat_var_true[object_type_true].index] = \
df[cat_var_true[object_type_true].index].astype('category')
if verbose:
print(df[cat_var_true[object_type_true].index].describe())
return df
def impute_most_freq(df):
"""
Imputes the most frequent value in place of NaN's
"""
most_freq = df.apply(lambda x: x.value_counts().index[0])
return df.fillna(most_freq)
def convert_str_int_labels(df):
"""
Converts columns with factors into integer representation
"""
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df = df.apply(lambda x: le.fit_transform(x))
return df
def clean_text(text_series):
"""
Cleans a column of Tweets. Removes all special characters,
websites, mentions.
"""
from re import sub as resub
text_series = text_series.apply(
lambda x:resub(
"[^A-Za-z0-9 ]+|(\w+:\/\/\S+)|htt", " ", x)
).str.strip().str.lower()
return text_series
def feat_select(df, text_mat=None, test_size_var=0.3,
alpha_space=np.linspace(0.01,0.02,20),
random_state_var=21, use_feat_select=True, plot=True):
"""
Performs feature selection on a dataframe with a single target
variable and n features. Test train split is also performed and only
splits of selected features are returned. Feature selection performed
using LASSO weight shrinking.
"""
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Lasso
x_train, x_test, y_train, y_test = train_test_split(
df.iloc[:,:-1], df.iloc[:,-1], test_size=test_size_var,
random_state=random_state_var, stratify=df.iloc[:,-1])
if use_feat_select:
param_grid = {'alpha': alpha_space}
lasso_gcv = GridSearchCV(Lasso(normalize=False), param_grid, cv=5,
n_jobs=-1, iid=True)
lasso_coeffs = lasso_gcv.fit(x_train, y_train).best_estimator_.coef_
if plot:
plt.barh(y=range(len(df.columns[:-1])), width=np.abs(lasso_coeffs),
tick_label=df.columns[:-1].values)
plt.ylabel('Column features')
plt.xlabel('Coefficient score')
plt.xticks(rotation=90)
plt.show()
try:
select_feats = df.columns[:-1][np.abs(lasso_coeffs) > 0].values
except:
print('Lasso Coefficients all turned out to be 0')
print(' or could not be calculated. Check your')
print(' dataset or switch off feature selection.')
x_train = x_train.loc[:,select_feats]
x_test = x_test.loc[:,select_feats]
if text_mat is not None:
x_train = np.concatenate((x_train.values,text_mat[x_train.index,:]), axis=1)
x_test = np.concatenate((x_test.values,text_mat[x_test.index,:]), axis=1)
else:
x_train = x_train.values
x_test = x_test.values
return x_train, x_test, y_train.values, y_test.values
def preprocess_block(df_all, text_col=None, cat_var_limit=10, bin_rep=1,
max_tfidf_features=100, ngram_range=(1,2),
use_feat_select=True,
alpha_space=np.linspace(0.01,0.02,20),
random_state_var=20, test_size_var=.3):
"""
Preprocessing block: used to preprocess and transform the data columns
---------------------------------------------------------------------------
-df_all (DataFrame): DataFrame with all the data, last column should be
target variable
-text_col (str): name of the text column, default is None for no text columns
-cat_var_limit (int): greatest number of unique values in a column to qualify
for conversion into a category column
-bin_rep (int): style of integer representation for category variables, 0 for
binary integer representation, 1 for 0 to nclass-1 representation
-max_tfidf_features (int): maximum number of features after vectorizing text
column using tfidf metric
-ngram_range (tuple): 2 tuple consisting of start and end point of ngram
-use_feat_select (bool): True for applying feature selection using LASSO for
non-text columns
-alpha_space (array of float): testing space for alpha parameter of LASSO
-random_state_var (int): Random seed for train-test-split
-test_size_var (float): ratio of test versus train split
---------------------------------------------------------------------------
"""
if text_col is not None:
df = df_all.drop([text_col], axis=1) #All columns except text
else:
df = df_all
df = impute_most_freq(convert_cat_cols(df,10))
if bin_rep:
try:
df[df.select_dtypes(include=['category']).columns] = \
convert_str_int_labels(df.select_dtypes(include=['category']))
#Transforms a category variable column into an integer variable
#column
except:
print('No columns with category variables. Change bin_rep to 0')
else:
df = pd.get_dummies(df,drop_first=True)
#Converts category variables into a multicolumn binary integer
#representation for each unique value
assert (df.notnull().all().all()), 'NaNs present in DataFrame'
if text_col is not None:
from sklearn.feature_extraction.text import TfidfVectorizer
try:
df_all[text_col] = clean_text(df_all[text_col])
except:
print('Cannot clean text, recheck text column')
text_numeric_matrix = TfidfVectorizer(max_features=max_tfidf_features,
                                              ngram_range=ngram_range,
stop_words='english')\
.fit_transform(df_all[text_col])
return feat_select(df, text_numeric_matrix.toarray(),
test_size_var=test_size_var,
use_feat_select=use_feat_select,
alpha_space=alpha_space,
random_state_var=random_state_var, plot=True)
else:
return feat_select(df, use_feat_select=use_feat_select,
test_size_var=test_size_var,
alpha_space=alpha_space, plot=True,
random_state_var=random_state_var)
def shallow_model(x_train, x_test, y_train, y_test, scaler_ch=0,
logreg_C=[0.8,1,1.2,1.4], knn_neigh=np.arange(3,16),
svc_c=[0.5,1,1.5,2,2.5,2.6], gb_max_depth=[2,3,4,5],
gb_n_est=[40,60,80,100], verbose=True, save=True,
model_file='Trained_shallow_models.sav'):
"""
    This function will fit and test several shallow classification models and
save them, models include:
'logreg': Logistic Regression using the lbfgs solver
'knnstep': K Nearest Neighbors
'svcstep': Support Vector Classification model
'gradbooststep': Gradient Boosted Classification Trees
Scaling options include:
MinMaxScaler between a range of -1 and 1
Normalizer
StandardScaler
The function will also run a 5 fold cross validated grid search for
hyperparameter optimization
---------------------------------------------------------------------------
-x_train (DataFrame or ndarray): Training data consisting of features
-x_test (DataFrame or ndarray): Testing data consisting of features
-y_train (DataFrame, Series or ndarray): Training data for predictions
(single class only)
    -y_test (DataFrame, Series or ndarray): Testing data for predictions
(single class only)
-scaler_ch (int): Decides which scaler to use, 0 for MinMaxScaler, 1 for
        Normalizer, 2 for StandardScaler
-logreg_C (list of float): Hyperparameter space for C to be used in the
Log Reg Classifier
-knn_neigh (list of int): Hyperparameter space for number of neighbors to
be used in the KNN Classifier
-svc_c (list of float): Hyperparameter space for C to be used in the
Support Vector Classifier
-gb_max_depth (list of int): Hyperparameter space for max depth to be used
in Gradient Boosted Classifier Trees
-gb_n_est (list of int): Hyperparameter space for number of estimators to
be used in Gradient Boosted Classifier Trees
-verbose (bool): Prints out details if True
-save (bool): Switch for saving the trained models in an external data file
-model_file (str): Filename for storing all the trained models
---------------------------------------------------------------------------
"""
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import Normalizer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
    from sklearn.model_selection import GridSearchCV
    import pickle
scaler = [('Scaler', MinMaxScaler(feature_range=(-1, 1))),\
('Scaler', Normalizer()), ('Scaler', StandardScaler())]
classifiers = [('logreg', LogisticRegression(solver='lbfgs',
max_iter=1000)),\
('knnstep', KNeighborsClassifier()),\
('svcstep', SVC(gamma='scale')),\
('gradbooststep', GradientBoostingClassifier(subsample=.8))]
parameters = {'logreg': {'logreg__C': logreg_C} ,\
'knnstep': {'knnstep__n_neighbors': knn_neigh},\
'svcstep': {'svcstep__C': svc_c},\
'gradbooststep': {'gradbooststep__max_depth': gb_max_depth,\
'gradbooststep__n_estimators': \
gb_n_est}}
model_dict = {}
for clf in classifiers:
pipeline = Pipeline([scaler[scaler_ch], clf])
print('\nAnalysis for : ' + clf[0])
gcv = GridSearchCV(pipeline, param_grid=parameters[clf[0]],
cv=5, iid=True)
gcv.fit(x_train, y_train)
model_dict[clf[0]] = (gcv, gcv.score(x_test, y_test))
if verbose:
print('The best parameters for: ' + clf[0] + ' are :' +
str(gcv.best_params_))
print(pd.DataFrame(gcv.cv_results_)
[['mean_test_score','params']])
print('The score for ' + clf[0] + ' is ' +
str(gcv.score(x_test, y_test)))
if save:
pickle.dump(model_dict, open(model_file, 'wb'))
return model_dict
def deep_model(x_train, x_test, y_train, y_test, scaler_ch=0,
patience_val=2, validation_split_val=.2, epochs_val=20,
verbose=True, save=True, model_file='Trained_deep_model.h5'):
"""
This function will fit and test a Deep Neural Network that uses ReLu
and softmax activation functions. It also uses an EarlyStopper
Scaling options include:
MinMaxScaler between a range of -1 and 1
Normalizer
StandardScaler
---------------------------------------------------------------------
-x_train (DataFrame or ndarray): Training data consisting of features
-x_test (DataFrame or ndarray): Testing data consisting of features
-y_train (DataFrame, Series or ndarray): Training data for predictions
(single class only)
    -y_test (DataFrame, Series or ndarray): Testing data for predictions
(single class only)
-scaler_ch (int): Decides which scaler to use, 0 for MinMaxScaler,
        1 for Normalizer, 2 for StandardScaler
-patience_val (int): Number of epochs to monitor before exiting
training if no major changes in accuracy occurs
-validation_split_val (float): ratio of split of dataset for testing
purposes
-epochs_val (int): Max number of epochs to train
-verbose (bool): Model training details will be printed out if True
-save (bool): Switch for saving the trained models in an external data
file
-model_file (str): Filename for storing the trained model. Must be H5
extension
----------------------------------------------------------------------
"""
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import Normalizer
from sklearn.preprocessing import StandardScaler
from keras.layers import Dense
from keras.models import Sequential
from keras.callbacks import EarlyStopping
scaler_list = [('Scaler', MinMaxScaler(feature_range=(-1, 1))),\
('Scaler', Normalizer()),\
('Scaler', StandardScaler())]
scaler = scaler_list[scaler_ch][1]
x_train = scaler.fit_transform(x_train)
x_test = scaler.fit_transform(x_test)
n_cols = x_train.shape[-1]
model = Sequential()
model.add(Dense(5, activation='relu', input_shape=(n_cols,)))
model.add(Dense(10, activation='relu'))
model.add(Dense(15, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(15, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(2,activation='softmax'))
early_stop_monitor = EarlyStopping(patience=patience_val)
model.compile (optimizer='adam',
loss= 'categorical_crossentropy',
metrics=['accuracy'])
train_dp_model=model.fit(x_train, pd.get_dummies(y_train).values,
validation_split=validation_split_val,
epochs = epochs_val,
callbacks =[early_stop_monitor],
verbose=verbose)
print('Loss metrics: ' + str(train_dp_model.history['loss'][-1]))
pred_prob = model.predict(x_test)
accuracy_dp = np.sum((pred_prob[:,1]>=0.5)==y_test) / len(y_test)
print('Testing accuracy: ' + str(accuracy_dp))
if save:
model.save(model_file)
return model
def load_model(type='shallow', filename='Trained_shallow_models.sav',
clf='logreg'):
"""
This function is used to load a previously saved trained model.
The model will have been saved in an external file.
---------------------------------------------------------------------
-type (str): 'shallow' to load a trained shallow model,
'deep' to load a trained deep model
-filename (str): Name of the file with the saved model
-clf (str): Only used for retrieving shallow models, this is the label
of the classifier -
'logreg': Logistic Regression using the lbfgs solver
'knnstep': K Nearest Neighbors
'svcstep': Support Vector Classification model
'gradbooststep': Gradient Boosted Classification Trees
---------------------------------------------------------------------
"""
assert (type == 'shallow' or type == 'deep'), 'Wrong input for type'
if type == 'shallow':
import pickle
try:
model_dict = pickle.load(open(filename, 'rb'))
except:
print('Could not load file. Check filename.')
for label, model in model_dict.items():
print(label + ' score is: ' + str(model[1]))
return model_dict[clf][0]
elif type == 'deep':
from keras.models import load_model
try:
model = load_model(filename)
except:
print('Could not load file. Check filename.')
print('Model Summary: ')
model.summary()
return model
df_all = pd.read_csv('../dat/cc_approvals_text.csv', na_values='?')
text_col = 'Tweet' #CSV column that is text
x_train, x_test, y_train, y_test = preprocess_block(df_all,'Tweet')
_=shallow_model(x_train, x_test, y_train, y_test)
_=deep_model(x_train, x_test, y_train, y_test)
clf_shallow = load_model(clf='logreg')
clf_deep = load_model('deep','Trained_deep_model.h5')
print(clf_shallow.predict(x_test))
print(clf_deep.predict(x_test))
df_all = pd.read_csv('../dat/cc_approvals_text.csv',na_values='?')
df_all = df_all.drop(['Tweet'],axis=1)
x_train, x_test, y_train, y_test = preprocess_block(df_all, bin_rep=0)
_ = shallow_model(x_train, x_test, y_train, y_test)
_ = deep_model(x_train, x_test, y_train, y_test)
clf_shallow = load_model(clf='logreg')
clf_deep = load_model('deep','Trained_deep_model.h5')
print(clf_shallow.predict(x_test))
print(clf_deep.predict(x_test))
###Output
_____no_output_____ |
day2/Advanced_grouping_and_aggregations.ipynb | ###Markdown
*Advanced grouping and aggregations*Let's start installing and importing Beam
###Code
%pip install -q apache-beam[interactive] --no-warn-conflicts
###Output
_____no_output_____
###Markdown
In case you get any error running the next cell, restart the runtime (either "*Runtime/Restart runtime*" in the top bar or *Ctrl+M*)
###Code
import apache_beam as beam
from apache_beam import pvalue
from apache_beam import Create, FlatMap, Map, ParDo, Filter, Flatten
from apache_beam import CombineGlobally, CombinePerKey
from apache_beam.transforms.combiners import Top, Mean, Count
from apache_beam import pvalue, window, WindowInto
import logging
from apache_beam.runners.interactive.interactive_runner import InteractiveRunner
import apache_beam.runners.interactive.interactive_beam as ib
###Output
_____no_output_____
###Markdown
 Some of the basic combiner functions are already built-in:- **`Count`** takes a `PCollection` and outputs the number of elements. - **`Top`** outputs the *n* largest/smallest of a `PCollection` given a comparison. - **`Mean`** outputs the arithmetic mean of a `PCollection`.Combiners can aggregate using the whole `PCollection` or by key using methods:- **`.Globally`** applies the combiner to the whole `PCollection`.- **`.PerKey`** applies the combiner for each key-value in the `PCollection`.
###Code
p = beam.Pipeline(InteractiveRunner())
elements = [
{"country": "China", "population": 1389, "continent": "Asia"},
{"country": "India", "population": 1311, "continent": "Asia"},
{"country": "Japan", "population": 126, "continent": "Asia"},
{"country": "USA", "population": 331, "continent": "America"},
{"country": "Ireland", "population": 5, "continent": "Europe"},
{"country": "Indonesia", "population": 273, "continent": "Asia"},
{"country": "Brazil", "population": 212, "continent": "America"},
{"country": "Egypt", "population": 102, "continent": "Africa"},
{"country": "Spain", "population": 47, "continent": "Europe"},
{"country": "Ghana", "population": 31, "continent": "Africa"},
{"country": "Australia", "population": 25, "continent": "Oceania"},
]
create = (p | "Create" >> Create(elements)
| "Map Keys" >> Map(lambda x: (x['continent'], x['population'])))
element_count_total = create | "Total Count" >> Count.Globally()
element_count_grouped = create | "Count Per Key" >> Count.PerKey()
top_grouped = create | "Top" >> Top.PerKey(n=2) # We get the top 2
mean_grouped = create | "Mean" >> Mean.PerKey()
ib.show_graph(p)
ib.show(element_count_total, element_count_grouped, top_grouped, mean_grouped)
###Output
_____no_output_____
###Markdown
We can also create our own **Combiners** and apply them both `Globally` and `PerKey`
###Code
p = beam.Pipeline(InteractiveRunner())
elements = ["Lorem ipsum dolor sit amet. Consectetur adipiscing elit",
"Sed eu velit nec sem vulputate loborti",
"In lobortis augue vitae sagittis molestie. Mauris volutpat tortor non purus elementum",
"Ut blandit massa et risus sollicitudin auctor"]
combine = (p | "Create" >> Create(elements)
| "Join" >> CombineGlobally(lambda x: ". ".join(x)))
ib.show(combine)
p = beam.Pipeline(InteractiveRunner())
elements = [
("Latin", "Lorem ipsum dolor sit amet. Consectetur adipiscing elit. Sed eu velit nec sem vulputate loborti"),
("Latin", "In lobortis augue vitae sagittis molestie. Mauris volutpat tortor non purus elementum"),
("English", "But as the riper should by time decease"),
("English", "That thereby beauty's rose might never die"),
("English", "From fairest creatures we desire increase"),
("Spanish", "tiempo que vivía un hidalgo de los de lanza en astillero, awindow_pcdarga antigua"),
("Spanish", "En un lugar de la Mancha, de cuyo nombre no quiero acordarme, no ha mucho"),
]
combine_key = (p | "Create" >> Create(elements)
| "Join By Language" >> CombinePerKey(lambda x: ". ".join(x)))
ib.show(combine_key)
###Output
_____no_output_____
###Markdown
**Combiners** also work on a window basis
###Code
p = beam.Pipeline(InteractiveRunner())
scores = [
{"player": "Marina", "score": 1000, "timestamp": 0},
{"player": "Cristina", "score": 2000, "timestamp": 10},
{"player": "Cristina", "score": 2000, "timestamp": 50},
{"player": "Marina", "score": 3000, "timestamp": 110},
{"player": "Juan", "score": 2000, "timestamp": 90},
{"player": "Cristina", "score": 2000, "timestamp": 80},
{"player": "Juan", "score": 1000, "timestamp": 100},
]
create = (p | "Create" >> Create(scores)
| "Add timestamps" >> Map(lambda x: window.TimestampedValue(x, x["timestamp"]))
| "To KV" >> Map(lambda x: (x["player"], x["score"]))
)
windowed = create | "FixedWindow" >> WindowInto(window.FixedWindows(60))
total_key = windowed | "Total Per Key" >> CombinePerKey(sum)
ib.show(total_key, include_window_info=True)
###Output
_____no_output_____
###Markdown
When using **windows** and **global combiners** we need to add `without_defaults`. This is because the default behaviour is to return a `PCollection` of one element for empty windows.
###Code
total = (windowed | Map(lambda x: x[1])
| "Total" >> CombineGlobally(sum).without_defaults())
ib.show(total, include_window_info=True)
###Output
_____no_output_____
###Markdown
---Let's try now to create our own `Combiner`. We are going to try to make our copy of `Mean` (i.e., a `Combiner` that calculates the average).
###Code
p = beam.Pipeline(InteractiveRunner())
def average_fn(elements):
# print(elements)
list_elements = list(elements)
return sum(list_elements)/len(list_elements)
average = (p | "Create" >> Create(range(100))
| CombineGlobally(average_fn))
ib.show(average)
###Output
_____no_output_____
###Markdown
 We can see that the output is wrong: the average of the first 100 non-negative integers is not 93.95. But why do we get that value? Roughly speaking, `CombineGlobally` is free to apply our function to subsets of the elements and then apply it again to those partial results, so the plain lambda ends up averaging partial averages instead of the raw values.
###Code
sum(range(100)) / 100
###Output
_____no_output_____
###Markdown
 We are going to need to use the combiner interface:

Solution

```python
p = beam.Pipeline(InteractiveRunner())

class AverageFn(beam.CombineFn):
    def create_accumulator(self):
        sum = 0
        count = 0
        return sum, count

    def add_input(self, accumulator, input):
        return accumulator[0] + input, accumulator[1] + 1

    def merge_accumulators(self, accumulators):
        sums = [x[0] for x in accumulators]
        counts = [x[1] for x in accumulators]
        return (sum(sums), sum(counts))

    def extract_output(self, final_accumulator):
        if final_accumulator[1] != 0:
            return final_accumulator[0] / final_accumulator[1]
        else:
            pass

average = (p | "Create" >> Create(range(100))
             | CombineGlobally(AverageFn()))

ib.show(average)
```

Streaming ExampleWe'll see this in Dataflow
###Code
# Extra imports and a placeholder options object so this cell can run on its own;
# fill in your own GCP project/region/temp_location before submitting to Dataflow.
import json
from apache_beam.io import ReadFromPubSub
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.runners import DataflowRunner
from apache_beam.transforms import trigger
from apache_beam.transforms.combiners import ToListCombineFn
options = PipelineOptions(streaming=True)  # placeholder options; extend for your environment
p = beam.Pipeline(DataflowRunner(), options)
topic = "projects/pubsub-public-data/topics/taxirides-realtime"
def first_and_last(element):
key = element[0]
dictionaries = element[1]
output_row = {}
output_row["ride_id"] = key
if len(dictionaries) == 2:
for row in dictionaries:
if row["ride_status"] == "dropoff":
output_row["dropoff"] = row["timestamp"]
if row["ride_status"] == "pickup":
output_row["pickup"] = row["timestamp"]
logging.info(f"Final row {output_row}")
return output_row
else:
logging.warning(f"Length was {len(dictionaries)}")
pass
pubsub = (p | "Read Topic" >> ReadFromPubSub(topic=topic)
| "Json Loads" >> Map(json.loads)
| "Filter" >> Filter(lambda x: x["ride_status"] != "enroute")
| "Parse" >> Map(lambda x: (x["ride_id"], {"ride_status": x["ride_status"], "timestamp": x["timestamp"]})) # KV of ride id, dict
| "Session window" >> WindowInto(window.Sessions(3600),
trigger=trigger.Repeatedly(trigger.AfterCount(2)),
accumulation_mode=trigger.AccumulationMode.DISCARDING
)
| "Combine" >> CombinePerKey(ToListCombineFn())
| Map(first_and_last)
)
p.run()
###Output
_____no_output_____ |
sentiment-analysis-network/Sentiment_Classification_Solutions.ipynb | ###Markdown
Sentiment Classification & How To "Frame Problems" for a Neural Networkby Andrew Trask- **Twitter**: @iamtrask- **Blog**: http://iamtrask.github.io What You Should Already Know- neural networks, forward and back-propagation- stochastic gradient descent- mean squared error- and train/test splits Where to Get Help if You Need it- Re-watch previous Udacity Lectures- Leverage the recommended Course Reading Material - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) (Check inside your classroom for a discount code)- Shoot me a tweet @iamtrask Tutorial Outline:- Intro: The Importance of "Framing a Problem" (this lesson)- [Curate a Dataset](lesson_1)- [Developing a "Predictive Theory"](lesson_2)- [**PROJECT 1**: Quick Theory Validation](project_1)- [Transforming Text to Numbers](lesson_3)- [**PROJECT 2**: Creating the Input/Output Data](project_2)- Putting it all together in a Neural Network (video only - nothing in notebook)- [**PROJECT 3**: Building our Neural Network](project_3)- [Understanding Neural Noise](lesson_4)- [**PROJECT 4**: Making Learning Faster by Reducing Noise](project_4)- [Analyzing Inefficiencies in our Network](lesson_5)- [**PROJECT 5**: Making our Network Train and Run Faster](project_5)- [Further Noise Reduction](lesson_6)- [**PROJECT 6**: Reducing Noise by Strategically Reducing the Vocabulary](project_6)- [Analysis: What's going on in the weights?](lesson_7) Lesson: Curate a Dataset
###Code
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
###Output
_____no_output_____
###Markdown
**Note:** The data in `reviews.txt` we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way.
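As a minimal sketch of that extra step (my own addition, not part of the original project), the cell below lowercases every review; it is a no-op here because the provided `reviews.txt` is already lower case.
###Code
# Hypothetical normalization step: lowercase every review before counting words.
# Harmless for this dataset, which is already lower case.
reviews = [review.lower() for review in reviews]
###Output
_____no_output_____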
###Code
len(reviews)
reviews[0]
labels[0]
###Output
_____no_output_____
###Markdown
Lesson: Develop a Predictive Theory
###Code
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
###Output
_____no_output_____
###Markdown
Project 1: Quick Theory ValidationThere are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.You'll find the [Counter](https://docs.python.org/2/library/collections.htmlcollections.Counter) class to be useful in this exercise, as well as the [numpy](https://docs.scipy.org/doc/numpy/reference/) library.
###Code
from collections import Counter
import numpy as np
###Output
_____no_output_____
###Markdown
We'll create three `Counter` objects, one for words from postive reviews, one for words from negative reviews, and one for all the words.
###Code
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
###Output
_____no_output_____
###Markdown
**TODO:** Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.**Note:** Throughout these projects, you should use `split(' ')` to divide a piece of text (such as a review) into individual words. If you use `split()` instead, you'll get slightly different results than what the videos and solutions show.
###Code
# Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
###Output
_____no_output_____
###Markdown
Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
###Code
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
###Output
_____no_output_____
###Markdown
 As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the **ratios** of word usage between positive and negative reviews.**TODO:** Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in `pos_neg_ratios`. >Hint: the positive-to-negative ratio for a given word can be calculated with `positive_counts[word] / float(negative_counts[word]+1)`. Notice the `+1` in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
###Code
pos_neg_ratios = Counter()
# Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
###Output
_____no_output_____
###Markdown
Examine the ratios you've calculated for a few words:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
_____no_output_____
###Markdown
 Looking closely at the values you just calculated, we see the following: * Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.* Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.* Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The `+1` we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:* Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute value from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.* When comparing absolute values it's easier to do that around zero than one. To fix these issues, we'll convert all of our ratios to new values using logarithms.**TODO:** Go through all the ratios you calculated and convert them to logarithms. (i.e. use `np.log(ratio)`)In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
###Code
# Convert ratios to logs
for word,ratio in pos_neg_ratios.most_common():
pos_neg_ratios[word] = np.log(ratio)
###Output
_____no_output_____
###Markdown
 **NOTE:** In the video, Andrew uses the following formulas for the previous cell:> * For any positive words, convert the ratio using `np.log(ratio)`> * For any negative words, convert the ratio using `-np.log(1/(ratio + 0.01))`These won't give you the exact same results as the simpler code we show in this notebook, but the values will be similar. In case that second equation looks strange, here's what it's doing: First, it divides one by a very small number, which will produce a larger positive number. Then, it takes the `log` of that, which produces numbers similar to the ones for the positive words. Finally, it negates the values by adding that minus sign up front. The results are extremely positive and extremely negative words having positive-to-negative ratios with similar magnitudes but opposite signs, just like when we use `np.log(ratio)`. (A small sketch of that alternative conversion is included after the next cell.) Examine the new ratios you've calculated for the same words from before:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
_____no_output_____
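###Markdown
The next cell is the small sketch of the alternative conversion described in the note above. It is my own illustration rather than part of the original project: it rebuilds the raw ratios (because `pos_neg_ratios` already holds log values at this point) and assumes that a "positive word" means a raw ratio greater than 1.
###Code
# Illustrative sketch of the video's alternative conversion (assumption:
# a word counts as "positive" when its raw positive-to-negative ratio is above 1).
alt_pos_neg_ratios = Counter()
for term, cnt in total_counts.most_common():
    if cnt > 100:
        ratio = positive_counts[term] / float(negative_counts[term] + 1)
        if ratio > 1:
            alt_pos_neg_ratios[term] = np.log(ratio)
        else:
            alt_pos_neg_ratios[term] = -np.log(1 / (ratio + 0.01))
print("Alternative ratio for 'amazing' = {}".format(alt_pos_neg_ratios["amazing"]))
print("Alternative ratio for 'terrible' = {}".format(alt_pos_neg_ratios["terrible"]))
###Output
_____no_output_____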
###Markdown
 If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above `1`, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below `-1`. It's now clear that both of these words are associated with specific, opposing sentiments.Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see *all* the words in the list.)The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write `reversed(pos_neg_ratios.most_common())`.)You should continue to see values similar to the earlier ones we checked – neutral words will be close to `0`, words will get more positive as their ratios approach and go above `1`, and words will get more negative as their ratios approach and go below `-1`. That's why we decided to use the logs instead of the raw ratios.
###Code
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
###Output
_____no_output_____
###Markdown
End of Project 1. Watch the next video to continue with Andrew's next lesson. Transforming Text into Numbers
###Code
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
###Output
_____no_output_____
###Markdown
Project 2: Creating the Input/Output Data**TODO:** Create a [set](https://docs.python.org/3/tutorial/datastructures.html#sets) named `vocab` that contains every word in the vocabulary.
###Code
vocab = set(total_counts.keys())
###Output
_____no_output_____
###Markdown
Run the following cell to check your vocabulary size. If everything worked correctly, it should print **74074**
###Code
vocab_size = len(vocab)
print(vocab_size)
###Output
74074
###Markdown
Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. `layer_0` is the input layer, `layer_1` is a hidden layer, and `layer_2` is the output layer.
###Code
from IPython.display import Image
Image(filename='sentiment_network_2.png')
###Output
_____no_output_____
###Markdown
**TODO:** Create a numpy array called `layer_0` and initialize it to all zeros. You will find the [zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) function particularly helpful here. Be sure you create `layer_0` as a 2-dimensional matrix with 1 row and `vocab_size` columns.
###Code
layer_0 = np.zeros((1,vocab_size))
###Output
_____no_output_____
###Markdown
Run the following cell. It should display `(1, 74074)`
###Code
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
###Output
_____no_output_____
###Markdown
`layer_0` contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
###Code
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
len(word2index)
###Output
_____no_output_____
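###Markdown
With the lookup table built, finding the slot in `layer_0` that belongs to any given word is a single dictionary lookup. The exact index you see will differ from run to run, since it depends on the iteration order of the `vocab` set.
###Code
# Example lookup -- the specific number is arbitrary; it only needs to be consistent
print(word2index["the"])
###Output
_____no_output_____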
###Markdown
**TODO:** Complete the implementation of `update_input_layer`. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside `layer_0`.
###Code
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
# count how many times each word is used in the given review and store the results in layer_0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
###Output
_____no_output_____
###Markdown
Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in `layer_0`.
###Code
update_input_layer(reviews[0])
layer_0
###Output
_____no_output_____
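###Markdown
As an optional sanity check, the counts stored in `layer_0` should add up to the number of space-separated tokens in the review you just processed, since every occurrence of every word increments exactly one slot.
###Code
# Optional sanity check: these two numbers should match
print(layer_0.sum())
print(len(reviews[0].split(" ")))
###Output
_____no_output_____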
###Markdown
**TODO:** Complete the implementation of `get_target_for_label`. It should return `0` or `1`, depending on whether the given label is `NEGATIVE` or `POSITIVE`, respectively.
###Code
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
if(label == 'POSITIVE'):
return 1
else:
return 0
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out `'POSITIVE'` and `1`, respectively.
###Code
labels[0]
get_target_for_label(labels[0])
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out `'NEGATIVE'` and `0`, respectively.
###Code
labels[1]
get_target_for_label(labels[1])
###Output
_____no_output_____
###Markdown
End of Project 2 solution. Watch the next video to continue with Andrew's next lesson. Project 3: Building a Neural Network **TODO:** We've included the framework of a class called `SentimentNetwork`. Implement all of the items marked `TODO` in the code. These include doing the following:- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. - Do **not** add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.- Re-use the code from earlier in this notebook to create the training data (see `TODO`s in the code)- Implement the `pre_process_data` function to create the vocabulary for our training data generating functions- Ensure `train` trains over the entire corpus Where to Get Help if You Need it- Re-watch previous week's Udacity Lectures- Chapters 3-5 - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) - (Check inside your classroom for a discount code)
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
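###Markdown
Before creating a full-sized network, it can help to sanity-check the shapes that flow through the forward pass. This is just a sketch with tiny made-up dimensions (3 inputs, 2 hidden nodes, 1 output); the real network uses one input node per vocabulary word.
###Code
import numpy as np

# Tiny hypothetical dimensions, only to check the matrix shapes
tiny_layer_0 = np.zeros((1, 3))
tiny_weights_0_1 = np.zeros((3, 2))
tiny_weights_1_2 = np.random.normal(0.0, 1.0, (2, 1))

tiny_layer_1 = tiny_layer_0.dot(tiny_weights_0_1)                      # hidden layer: no activation function
tiny_layer_2 = 1 / (1 + np.exp(-tiny_layer_1.dot(tiny_weights_1_2)))   # output layer: sigmoid

print(tiny_layer_1.shape)   # (1, 2)
print(tiny_layer_2.shape)   # (1, 1)
###Output
_____no_output_____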
###Markdown
Run the following cell to create a `SentimentNetwork` that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
###Output
_____no_output_____
###Markdown
Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). **We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.**
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):437.8 #Correct:500 #Tested:1000 Testing Accuracy:50.0%
###Markdown
Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
###Code
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):63.97 #Correct:1251 #Trained:2501 Training Accuracy:50.0%
Progress:20.8% Speed(reviews/sec):62.45 #Correct:2501 #Trained:5001 Training Accuracy:50.0%
Progress:31.2% Speed(reviews/sec):62.81 #Correct:3751 #Trained:7501 Training Accuracy:50.0%
Progress:38.1% Speed(reviews/sec):62.98 #Correct:4584 #Trained:9167 Training Accuracy:50.0%
###Markdown
That most likely didn't train very well. Part of the reason may be that the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, `0.01`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:6.85% Speed(reviews/sec):61.20 #Correct:821 #Trained:1646 Training Accuracy:49.8%
###Markdown
That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, `0.001`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):45.04 #Correct:1255 #Trained:2501 Training Accuracy:50.1%
Progress:11.7% Speed(reviews/sec):46.00 #Correct:1409 #Trained:2810 Training Accuracy:50.1%
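###Markdown
Before looking at the results, here is a toy illustration (completely separate from the review data) of why an overly large learning rate can keep gradient descent from settling: minimizing f(x) = x**2 overshoots and diverges when the step is too big, but converges with a smaller step.
###Code
# A 1-D gradient descent sketch on f(x) = x**2, whose gradient is 2x
def toy_gradient_descent(learning_rate, steps=10):
    x = 1.0
    for _ in range(steps):
        x -= learning_rate * 2 * x
    return x

print(toy_gradient_descent(1.1))   # step too large: |x| grows with every update
print(toy_gradient_descent(0.1))   # smaller step: x shrinks toward the minimum at 0
###Output
_____no_output_____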
###Markdown
With a learning rate of `0.001`, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson. End of Project 3. Watch the next video to continue with Andrew's next lesson. Understanding Neural Noise
###Code
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
###Output
_____no_output_____
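###Markdown
One rough way to quantify the "noise" Andrew is talking about (this check is not part of the official solution): see how much of the first review's count mass comes from a handful of very common, mostly neutral tokens. That is exactly the kind of signal-free input Project 4 removes by switching to binary indicators.
###Code
# What fraction of the tokens in reviews[0] comes from its ten most common words?
total_tokens = sum(review_counter.values())
top_ten_tokens = sum(cnt for word, cnt in review_counter.most_common(10))
print("{:.1f}% of the tokens come from just 10 distinct words".format(100 * top_ten_tokens / total_tokens))
###Output
_____no_output_____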
###Markdown
Project 4: Reducing Noise in Our Input Data**TODO:** Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:* Copy the `SentimentNetwork` class you created earlier into the following cell.* Modify `update_input_layer` so it does not count how many times each word is used, but rather just stores whether or not a word was used. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 4"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
## New for Project 4: changed to set to 1 instead of add 1
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 4 solution. Watch the next video to continue with Andrew's next lesson. Analyzing Inefficiencies in our Network
###Code
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
###Output
_____no_output_____
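###Markdown
To see why this matters at realistic sizes, here is a rough timing sketch (the exact numbers will vary by machine) that scales the same idea up to matrices shaped like our 74074-word vocabulary: the full dot product multiplies every row of the weight matrix, while the index-based sum only touches the rows for the few words that actually appear in a review.
###Code
import time
import numpy as np

vocab_size_example = 74074               # same size as our vocabulary
hidden_size_example = 10
big_layer_0 = np.zeros(vocab_size_example)
big_weights_0_1 = np.random.randn(vocab_size_example, hidden_size_example)
used_indices = [4, 9, 1000, 20000]       # pretend these are the words in one review
big_layer_0[used_indices] = 1

start = time.time()
for _ in range(1000):
    _ = big_layer_0.dot(big_weights_0_1)
print("full dot product: {:.3f}s".format(time.time() - start))

start = time.time()
for _ in range(1000):
    hidden = np.zeros(hidden_size_example)
    for index in used_indices:
        hidden += big_weights_0_1[index]
print("index-based sum:  {:.3f}s".format(time.time() - start))
###Output
_____no_output_____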
###Markdown
Project 5: Making our Network More Efficient**TODO:** Make the `SentimentNetwork` class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Remove the `update_input_layer` function - you will not need it in this version.* Modify `init_network`:>* You no longer need a separate input layer, so remove any mention of `self.layer_0`>* You will be dealing with the old hidden layer more directly, so create `self.layer_1`, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero* Modify `train`:>* Change the name of the input parameter `training_reviews` to `training_reviews_raw`. This will help with the next step.>* At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from `word2index`) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local `list` variable named `training_reviews` that should contain a `list` for each review in `training_reviews_raw`. Those lists should contain the indices for words found in the review.>* Remove call to `update_input_layer`>* Use `self`'s `layer_1` instead of a local `layer_1` object.>* In the forward pass, replace the code that updates `layer_1` with new logic that only adds the weights for the indices used in the review.>* When updating `weights_0_1`, only update the individual weights that were used in the forward pass.* Modify `run`:>* Remove call to `update_input_layer` >* Use `self`'s `layer_1` instead of a local `layer_1` object.>* Much like you did in `train`, you will need to pre-process the `review` so you can work with word indices, then update `layer_1` by adding weights for the indices used in the review. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 5"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it once again.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
###Markdown
That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 5 solution. Watch the next video to continue with Andrew's next lesson. Further Noise Reduction
###Code
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
###Output
_____no_output_____
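###Markdown
Before building Project 6, it can help to see what a polarity cutoff would actually discard. The sketch below (the cutoff value here is just an example, not the one you have to use) counts how many of the ratio-scored words sit close to neutral; those are the words that carry little sentiment signal.
###Code
# Words whose log positive-to-negative ratio is near zero would be dropped
# by a polarity cutoff (0.05 here, purely as an example).
example_cutoff = 0.05
near_neutral = [word for word, ratio in pos_neg_ratios.most_common()
                if abs(ratio) < example_cutoff]
print("{} of {} ratio-scored words are within +/- {} of neutral".format(
    len(near_neutral), len(pos_neg_ratios), example_cutoff))
print(near_neutral[:20])
###Output
_____no_output_____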
###Markdown
Project 6: Reducing Noise by Strategically Reducing the Vocabulary**TODO:** Improve `SentimentNetwork`'s performance by reducing more noise in the vocabulary. Specifically, do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Modify `pre_process_data`:>* Add two additional parameters: `min_count` and `polarity_cutoff`>* Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)>* Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. >* Change so words are only added to the vocabulary if they occur in the vocabulary more than `min_count` times.>* Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least `polarity_cutoff`* Modify `__init__`:>* Add the same two parameters (`min_count` and `polarity_cutoff`) and use them when you call `pre_process_data` The following code is the same as the previous project, with project-specific changes marked with `"New for Project 6"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
## New for Project 6: added min_count and polarity_cutoff parameters
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count(int) - Words should only be added to the vocabulary
if they occur more than this many times
polarity_cutoff(float) - The absolute value of a word's positive-to-negative
ratio must be at least this big to be considered.
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
## New for Project 6: added min_count and polarity_cutoff parameters
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
## ----------------------------------------
## New for Project 6: Calculate positive-to-negative ratios for words before
# building vocabulary
#
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
#
## end New for Project 6
## ----------------------------------------
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
## New for Project 6: only add words that occur at least min_count times
# and for words with pos/neg ratios, only add words
# that meet the polarity_cutoff
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a small polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a much larger polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 6 solution. Watch the next video to continue with Andrew's next lesson. Analysis: What's Going on in the Weights?
###Code
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):6031. #Correct:822 #Tested:1000 Testing Accuracy:82.2%
###Markdown
End of Project 6 solution. Watch the next video to continue with Andrew's next lesson. Analysis: What's Going on in the Weights?
###Code
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
###Output
/anaconda3/envs/sentiment/lib/python3.6/site-packages/bokeh/util/deprecation.py:34: BokehDeprecationWarning:
Supplying a user-defined data source AND iterable values to glyph methods is deprecated.
See https://github.com/bokeh/bokeh/issues/2056 for more information.
warn(message)
/anaconda3/envs/sentiment/lib/python3.6/site-packages/bokeh/util/deprecation.py:34: BokehDeprecationWarning:
Supplying a user-defined data source AND iterable values to glyph methods is deprecated.
See https://github.com/bokeh/bokeh/issues/2056 for more information.
warn(message)
###Markdown
Sentiment Classification & How To "Frame Problems" for a Neural Networkby Andrew Trask- **Twitter**: @iamtrask- **Blog**: http://iamtrask.github.io What You Should Already Know- neural networks, forward and back-propagation- stochastic gradient descent- mean squared error- and train/test splits Where to Get Help if You Need it- Re-watch previous Udacity Lectures- Leverage the recommended Course Reading Material - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) (Check inside your classroom for a discount code)- Shoot me a tweet @iamtrask Tutorial Outline:- Intro: The Importance of "Framing a Problem" (this lesson)- [Curate a Dataset](lesson_1)- [Developing a "Predictive Theory"](lesson_2)- [**PROJECT 1**: Quick Theory Validation](project_1)- [Transforming Text to Numbers](lesson_3)- [**PROJECT 2**: Creating the Input/Output Data](project_2)- Putting it all together in a Neural Network (video only - nothing in notebook)- [**PROJECT 3**: Building our Neural Network](project_3)- [Understanding Neural Noise](lesson_4)- [**PROJECT 4**: Making Learning Faster by Reducing Noise](project_4)- [Analyzing Inefficiencies in our Network](lesson_5)- [**PROJECT 5**: Making our Network Train and Run Faster](project_5)- [Further Noise Reduction](lesson_6)- [**PROJECT 6**: Reducing Noise by Strategically Reducing the Vocabulary](project_6)- [Analysis: What's going on in the weights?](lesson_7) Lesson: Curate a Dataset
###Code
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
###Output
_____no_output_____
###Markdown
**Note:** The data in `reviews.txt` we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way.
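As a hedged illustration of that point, the sketch below lowercases a couple of made-up raw strings (the `raw_reviews` list here is hypothetical, not part of this dataset):
###Code
# Hypothetical example only: lowercasing raw text so 'The', 'the', and 'THE'
# all map to the same token. Not needed for the preprocessed reviews.txt above.
raw_reviews = ["The movie was GREAT", "the plot was thin"]
normalized_reviews = [review.lower() for review in raw_reviews]
print(normalized_reviews)
###Output
_____no_output_____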
###Code
len(reviews)
reviews[0]
labels[0]
###Output
_____no_output_____
###Markdown
Lesson: Develop a Predictive Theory
###Code
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
###Output
labels.txt : reviews.txt
NEGATIVE : this movie is terrible but it has some good effects . ...
POSITIVE : adrian pasdar is excellent is this film . he makes a fascinating woman . ...
NEGATIVE : comment this movie is impossible . is terrible very improbable bad interpretat...
POSITIVE : excellent episode movie ala pulp fiction . days suicides . it doesnt get more...
NEGATIVE : if you haven t seen this it s terrible . it is pure trash . i saw this about ...
POSITIVE : this schiffer guy is a real genius the movie is of excellent quality and both e...
###Markdown
Project 1: Quick Theory ValidationThere are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.You'll find the [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) class to be useful in this exercise, as well as the [numpy](https://docs.scipy.org/doc/numpy/reference/) library.
###Code
from collections import Counter
import numpy as np
###Output
_____no_output_____
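###Markdown
As a quick, hedged illustration of the `Counter` API mentioned above (toy sentence only, not the review data):
###Code
# Toy illustration of collections.Counter: tally words, then list the most common.
from collections import Counter
toy_counts = Counter("the movie was the best movie ever".split(" "))
print(toy_counts.most_common(2))  # e.g. [('the', 2), ('movie', 2)]
###Output
_____no_output_____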
###Markdown
We'll create three `Counter` objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
###Code
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
###Output
_____no_output_____
###Markdown
**TODO:** Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.**Note:** Throughout these projects, you should use `split(' ')` to divide a piece of text (such as a review) into individual words. If you use `split()` instead, you'll get slightly different results than what the videos and solutions show.
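A small, hedged sketch of why that note matters: `split(' ')` keeps empty strings for repeated spaces, while `split()` collapses them, so the two can give slightly different word counts (the example string below is made up):
###Code
# split(' ') versus split() on a string with a double space and a trailing space.
sample = "great  movie "
print(sample.split(' '))   # ['great', '', 'movie', '']
print(sample.split())      # ['great', 'movie']
###Output
_____no_output_____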
###Code
# Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
###Output
_____no_output_____
###Markdown
Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
###Code
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
###Output
_____no_output_____
###Markdown
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the **ratios** of word usage between positive and negative reviews.**TODO:** Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in `pos_neg_ratios`. >Hint: the positive-to-negative ratio for a given word can be calculated with `positive_counts[word] / float(negative_counts[word]+1)`. Notice the `+1` in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
###Code
pos_neg_ratios = Counter()
# Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
###Output
_____no_output_____
###Markdown
Examine the ratios you've calculated for a few words:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
Pos-to-neg ratio for 'the' = 1.0607993145235326
Pos-to-neg ratio for 'amazing' = 4.022813688212928
Pos-to-neg ratio for 'terrible' = 0.17744252873563218
###Markdown
Looking closely at the values you just calculated, we see the following: * Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.* Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.* Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The `+1` we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:* Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute value from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.* When comparing absolute values it's easier to do that around zero than one. To fix these issues, we'll convert all of our ratios to new values using logarithms.**TODO:** Go through all the ratios you calculated and convert them to logarithms. (i.e. use `np.log(ratio)`)In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
###Code
# Convert ratios to logs
for word,ratio in pos_neg_ratios.most_common():
pos_neg_ratios[word] = np.log(ratio)
###Output
_____no_output_____
###Markdown
**NOTE:** In the video, Andrew uses the following formulas for the previous cell:> * For any positive words, convert the ratio using `np.log(ratio)`> * For any negative words, convert the ratio using `-np.log(1/(ratio + 0.01))`These won't give you the exact same results as the simpler code we show in this notebook, but the values will be similar. In case that second equation looks strange, here's what it's doing: First, it divides one by a very small number, which will produce a larger positive number. Then, it takes the `log` of that, which produces numbers similar to the ones for the positive words. Finally, it negates the values by adding that minus sign up front. The results are extremely positive and extremely negative words having positive-to-negative ratios with similar magnitudes but opposite signs, just like when we use `np.log(ratio)`. Examine the new ratios you've calculated for the same words from before:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
Pos-to-neg ratio for 'the' = 0.05902269426102881
Pos-to-neg ratio for 'amazing' = 1.3919815802404802
Pos-to-neg ratio for 'terrible' = -1.7291085042663878
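###Markdown
As a quick, hedged check of the two conversion formulas described in the note above, the sketch below applies both to a couple of illustrative raw ratios (the values 4.0 and 0.18 are just examples, not taken from the data):
###Code
# Compare the simple log conversion with the piecewise formula from the video.
# The raw ratios below are illustrative only.
raw_ratios = np.array([4.0, 0.18])
simple = np.log(raw_ratios)
piecewise = np.where(raw_ratios > 1, np.log(raw_ratios), -np.log(1 / (raw_ratios + 0.01)))
print("simple   :", simple)
print("piecewise:", piecewise)
###Output
_____no_output_____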
###Markdown
If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above `1`, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below `-1`. It's now clear that both of these words are associated with specific, opposing sentiments.Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see *all* the words in the list.)The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write `reversed(pos_neg_ratios.most_common())`.)You should continue to see values similar to the earlier ones we checked – neutral words will be close to `0`, words will get more positive as their ratios approach and go above `1`, and words will get more negative as their ratios approach and go below `-1`. That's why we decided to use the logs instead of the raw ratios.
###Code
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
###Output
_____no_output_____
###Markdown
End of Project 1. Watch the next video to continue with Andrew's next lesson. Transforming Text into Numbers
###Code
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
###Output
_____no_output_____
###Markdown
Project 2: Creating the Input/Output Data**TODO:** Create a [set](https://docs.python.org/3/tutorial/datastructures.html#sets) named `vocab` that contains every word in the vocabulary.
###Code
vocab = set(total_counts.keys())
###Output
_____no_output_____
###Markdown
Run the following cell to check your vocabulary size. If everything worked correctly, it should print **74074**
###Code
vocab_size = len(vocab)
print(vocab_size)
###Output
74074
###Markdown
Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. `layer_0` is the input layer, `layer_1` is a hidden layer, and `layer_2` is the output layer.
###Code
from IPython.display import Image
Image(filename='sentiment_network_2.png')
###Output
_____no_output_____
###Markdown
**TODO:** Create a numpy array called `layer_0` and initialize it to all zeros. You will find the [zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) function particularly helpful here. Be sure you create `layer_0` as a 2-dimensional matrix with 1 row and `vocab_size` columns.
###Code
layer_0 = np.zeros((1,vocab_size))
###Output
_____no_output_____
###Markdown
Run the following cell. It should display `(1, 74074)`
###Code
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
###Output
_____no_output_____
###Markdown
`layer_0` contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
###Code
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `update_input_layer`. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside `layer_0`.
###Code
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
# count how many times each word is used in the given review and store the results in layer_0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
###Output
_____no_output_____
###Markdown
Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in `layer_0`.
###Code
update_input_layer(reviews[0])
layer_0
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `get_target_for_label`. It should return `0` or `1`, depending on whether the given label is `NEGATIVE` or `POSITIVE`, respectively.
###Code
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
if(label == 'POSITIVE'):
return 1
else:
return 0
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out`'POSITIVE'` and `1`, respectively.
###Code
labels[0]
get_target_for_label(labels[0])
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out `'NEGATIVE'` and `0`, respectively.
###Code
labels[1]
get_target_for_label(labels[1])
###Output
_____no_output_____
###Markdown
End of Project 2 solution. Watch the next video to continue with Andrew's next lesson. Project 3: Building a Neural Network **TODO:** We've included the framework of a class called `SentimentNetwork`. Implement all of the items marked `TODO` in the code. These include doing the following:- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. - Do **not** add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.- Re-use the code from earlier in this notebook to create the training data (see `TODO`s in the code)- Implement the `pre_process_data` function to create the vocabulary for our training data generating functions- Ensure `train` trains over the entire corpus Where to Get Help if You Need it- Re-watch previous week's Udacity Lectures- Chapters 3-5 - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) - (Check inside your classroom for a discount code)
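Before the full class in the next cell, here is a minimal, hedged sketch of the forward pass it implements: a 1 x vocab input row times `weights_0_1` with no activation on the hidden layer, then a sigmoid on the output (toy sizes and random weights, for illustration only):
###Code
# Toy-sized forward pass: linear hidden layer (no non-linearity), sigmoid output.
# Shapes and values are illustrative only; the real network uses a vocab-sized input.
import numpy as np
rng = np.random.RandomState(1)
toy_layer_0 = np.zeros((1, 6))                  # pretend vocabulary of 6 words
toy_layer_0[0, [1, 4]] = 1                      # two words "present" in the review
toy_weights_0_1 = rng.normal(0.0, 0.1, (6, 3))  # input -> hidden
toy_weights_1_2 = rng.normal(0.0, 0.1, (3, 1))  # hidden -> output
toy_layer_1 = toy_layer_0.dot(toy_weights_0_1)  # hidden layer output, no activation
toy_layer_2 = 1 / (1 + np.exp(-toy_layer_1.dot(toy_weights_1_2)))  # sigmoid output
print(toy_layer_2)                              # a single prediction between 0 and 1
###Output
_____no_output_____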
###Code
import time
import sys
import numpy as np
from collections import Counter
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
self.review_vocab = list(review_vocab)
label_vocab = set()
for label in labels:
label_vocab.add(label)
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for n,word in enumerate(self.review_vocab):
self.word2index[word]=n
self.review_vocab2 = list(self.word2index.keys())
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for n,label in enumerate(self.label_vocab):
self.label2index[label] = n
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
#self.weights_1_2 = np.zeros((self.hidden_nodes,self.output_nodes))
self.weights_1_2 = np.random.normal(0.0,self.output_nodes**(-0.5),(self.hidden_nodes,self.output_nodes))
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
self.layer_0 *= 0
for word in review.split(" "):
if word in self.review_vocab:
self.layer_0[0,self.word2index[word]]+=1
#
#
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
#pass
#
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
#pass
r = 1
if label == "NEGATIVE":
r=0
return r
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
#pass
return 1/(1+np.exp(-x))
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
#pass
return output * (1 - output)
def train(self, training_reviews, training_labels):
#
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
#
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
#
# Remember when we started for printing time statistics
start = time.time()
#
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
#
# TODO: Get the next review and its correct label
training_review = training_reviews[i]
training_label = training_labels[i]
#
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
#Input Layer
self.update_input_layer(training_review)
#input_glob = self.layer_0
#Input to Hidden Layer
#W_0_1 = self.weights_0_1
h_in = self.layer_0.dot(self.weights_0_1)
#Output of Hidden Layer
h_out = h_in
#Input to Output Layer
#W_1_2 = self.weights_1_2
y_in = h_in.dot(self.weights_1_2)
#Output
y_out = self.sigmoid(y_in)
#Output label
y = self.get_target_for_label(training_label)
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
error_glob = (y-y_out)
error_term_output = error_glob * self.sigmoid_output_2_derivative(y_out)
error_term_hidden = self.weights_1_2.dot(error_term_output)
#
#
self.weights_0_1 += self.learning_rate * self.layer_0.T.dot(error_term_hidden.T)
self.weights_1_2 += self.learning_rate * error_term_output * h_out.T
if np.abs(error_glob)<0.5:
correct_so_far += 1
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
#review_run = review.lower()
#Input Layer
self.update_input_layer(review.lower())
#input_glob = self.layer_0
#Input to Hidden Layer
#W_0_1 = self.weights_0_1
#h_in = np.dot(input_glob,W_0_1)
h_in = self.layer_0.dot(self.weights_0_1)
#Output of Hidden Layer
#h_out = h_in
#Input to Output Layer
#W_1_2 = self.weights_1_2
#y_in = np.dot(h_in,W_1_2)
#y_in = h_in.dot(self.weights_1_2)
#Output
y_out = self.sigmoid(h_in.dot(self.weights_1_2))
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
output_pred = "POSITIVE"
if y_out < 0.5:
output_pred = "NEGATIV"
return output_pred
###Output
_____no_output_____
###Markdown
Run the following cell to create a `SentimentNetwork` that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
###Output
_____no_output_____
###Markdown
Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). **We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.**
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:6.3% Speed(reviews/sec):2.851 #Correct:32 #Tested:64 Testing Accuracy:50.0%
###Markdown
Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
###Code
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:0 #Trained:1 Training Accuracy:0.0%
Progress:10.4% Speed(reviews/sec):2.573 #Correct:1250 #Trained:2501 Training Accuracy:49.9%
Progress:19.8% Speed(reviews/sec):2.659 #Correct:2379 #Trained:4760 Training Accuracy:49.9%
###Markdown
That most likely didn't train very well. Part of the reason may be that the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, `0.01`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, `0.001`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
With a learning rate of `0.001`, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson. End of Project 3. Watch the next video to continue with Andrew's next lesson. Understanding Neural Noise
###Code
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
###Output
_____no_output_____
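###Markdown
A small, hedged sketch of one way to tame the noise shown above: record only whether a word appears (a 1) instead of how many times, so very frequent filler tokens such as `''`, `'.'`, and `'the'` stop dominating the input vector. The counts below are made up for illustration; the next project applies this idea inside the network class.
###Code
# Illustrative only: a count-based input row versus a binary "word present?" row.
toy_counts = np.array([[18., 9., 6., 1., 1.]])   # e.g. '', '.', 'the', 'great', 'movie'
binary_row = (toy_counts > 0).astype(float)
print("counts:", toy_counts)
print("binary:", binary_row)
###Output
_____no_output_____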
###Markdown
Project 4: Reducing Noise in Our Input Data**TODO:** Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:* Copy the `SentimentNetwork` class you created earlier into the following cell.* Modify `update_input_layer` so it does not count how many times each word is used, but rather just stores whether or not a word was used. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 4"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
## New for Project 4: changed to set to 1 instead of add 1
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 4 solution. Watch the next video to continue with Andrew's next lesson. Analyzing Inefficiencies in our Network
###Code
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
###Output
_____no_output_____
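###Markdown
As a quick, hedged check that the two computations above agree, the sketch below compares the full dot product (which multiplies many zeros) with simply summing the weight rows for the active indices, reusing `layer_0`, `weights_0_1`, and `indices` from the cells above:
###Code
# Verify that summing the selected weight rows matches the full matrix product.
layer_1_dense = layer_0.dot(weights_0_1)           # dense: multiplies lots of zeros
layer_1_sparse = weights_0_1[indices].sum(axis=0)  # sparse: touch only the active rows
print(np.allclose(layer_1_dense, layer_1_sparse))  # expected: True
###Output
_____no_output_____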
###Markdown
Project 5: Making our Network More Efficient**TODO:** Make the `SentimentNetwork` class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Remove the `update_input_layer` function - you will not need it in this version.* Modify `init_network`:>* You no longer need a separate input layer, so remove any mention of `self.layer_0`>* You will be dealing with the old hidden layer more directly, so create `self.layer_1`, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero* Modify `train`:>* Change the name of the input parameter `training_reviews` to `training_reviews_raw`. This will help with the next step.>* At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from `word2index`) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local `list` variable named `training_reviews` that should contain a `list` for each review in `training_reviews_raw`. Those lists should contain the indices for words found in the review.>* Remove call to `update_input_layer`>* Use `self`'s `layer_1` instead of a local `layer_1` object.>* In the forward pass, replace the code that updates `layer_1` with new logic that only adds the weights for the indices used in the review.>* When updating `weights_0_1`, only update the individual weights that were used in the forward pass.* Modify `run`:>* Remove call to `update_input_layer` >* Use `self`'s `layer_1` instead of a local `layer_1` object.>* Much like you did in `train`, you will need to pre-process the `review` so you can work with word indices, then update `layer_1` by adding weights for the indices used in the review. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 5"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it once again.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 5 solution. Watch the next video to continue with Andrew's next lesson. Further Noise Reduction
###Code
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
###Output
_____no_output_____
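###Markdown
Before Project 6, here is a hedged sketch of the kind of filtering it introduces: counting how many words would survive an example `min_count` and `polarity_cutoff`, reusing the `total_counts` and `pos_neg_ratios` computed earlier in the notebook. The thresholds below are illustrative only, not the values you must use.
###Code
# Illustrative preview of vocabulary filtering: keep a word if it is frequent enough
# AND, when it has a known ratio, polarized enough. Example thresholds only.
example_min_count = 20
example_polarity_cutoff = 0.05
kept = [word for word, cnt in total_counts.items()
        if cnt > example_min_count
        and (word not in pos_neg_ratios or abs(pos_neg_ratios[word]) >= example_polarity_cutoff)]
print(len(kept), "of", len(total_counts), "words kept with these example thresholds")
###Output
_____no_output_____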
###Markdown
Project 6: Reducing Noise by Strategically Reducing the Vocabulary**TODO:** Improve `SentimentNetwork`'s performance by reducing more noise in the vocabulary. Specifically, do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Modify `pre_process_data`:>* Add two additional parameters: `min_count` and `polarity_cutoff`>* Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)>* Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. >* Change so words are only added to the vocabulary if they occur in the vocabulary more than `min_count` times.>* Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least `polarity_cutoff`* Modify `__init__`:>* Add the same two parameters (`min_count` and `polarity_cutoff`) and use them when you call `pre_process_data` The following code is the same as the previous project, with project-specific changes marked with `"New for Project 6"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
## New for Project 6: added min_count and polarity_cutoff parameters
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count(int) - Words should only be added to the vocabulary
if they occur more than this many times
polarity_cutoff(float) - The absolute value of a word's positive-to-negative
ratio must be at least this big to be considered.
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
## New for Project 6: added min_count and polarity_cutoff parameters
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
## ----------------------------------------
## New for Project 6: Calculate positive-to-negative ratios for words before
# building vocabulary
#
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
#
## end New for Project 6
## ----------------------------------------
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
## New for Project 6: only add words that occur at least min_count times
# and for words with pos/neg ratios, only add words
# that meet the polarity_cutoff
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
        # Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a small polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a much larger polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 6 solution. Watch the next video to continue with Andrew's next lesson. Analysis: What's Going on in the Weights?
###Code
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
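    # Rank every word in the vocabulary by the dot product between its hidden-layer
    # weight vector and the focus word's vector (higher dot product = more similar)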
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
###Output
_____no_output_____
###Markdown
Sentiment Classification & How To "Frame Problems" for a Neural Networkby Andrew Trask- **Twitter**: @iamtrask- **Blog**: http://iamtrask.github.io What You Should Already Know- neural networks, forward and back-propagation- stochastic gradient descent- mean squared error- and train/test splits Where to Get Help if You Need it- Re-watch previous Udacity Lectures- Leverage the recommended Course Reading Material - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) (Check inside your classroom for a discount code)- Shoot me a tweet @iamtrask Tutorial Outline:- Intro: The Importance of "Framing a Problem" (this lesson)- [Curate a Dataset](lesson_1)- [Developing a "Predictive Theory"](lesson_2)- [**PROJECT 1**: Quick Theory Validation](project_1)- [Transforming Text to Numbers](lesson_3)- [**PROJECT 2**: Creating the Input/Output Data](project_2)- Putting it all together in a Neural Network (video only - nothing in notebook)- [**PROJECT 3**: Building our Neural Network](project_3)- [Understanding Neural Noise](lesson_4)- [**PROJECT 4**: Making Learning Faster by Reducing Noise](project_4)- [Analyzing Inefficiencies in our Network](lesson_5)- [**PROJECT 5**: Making our Network Train and Run Faster](project_5)- [Further Noise Reduction](lesson_6)- [**PROJECT 6**: Reducing Noise by Strategically Reducing the Vocabulary](project_6)- [Analysis: What's going on in the weights?](lesson_7) Lesson: Curate a Dataset
###Code
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
###Output
_____no_output_____
###Markdown
**Note:** The data in `reviews.txt` we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way.
###Code
len(reviews)
reviews[0]
labels[0]
###Output
_____no_output_____
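###Markdown
As a side note, if the reviews were not already lower-cased, the conversion step mentioned above could look like the minimal sketch below (the `raw_examples` list is hypothetical and only for illustration).
###Code
# A small sketch of the lower-casing step described above.
# Not needed for reviews.txt, which already contains only lower case characters.
raw_examples = ["The movie was GREAT", "THE acting was terrible"]
normalized_examples = [example.lower() for example in raw_examples]
print(normalized_examples)
###Output
_____no_output_____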
###Markdown
Lesson: Develop a Predictive Theory
###Code
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
###Output
labels.txt : reviews.txt
NEGATIVE : this movie is terrible but it has some good effects . ...
POSITIVE : adrian pasdar is excellent is this film . he makes a fascinating woman . ...
NEGATIVE : comment this movie is impossible . is terrible very improbable bad interpretat...
POSITIVE : excellent episode movie ala pulp fiction . days suicides . it doesnt get more...
NEGATIVE : if you haven t seen this it s terrible . it is pure trash . i saw this about ...
POSITIVE : this schiffer guy is a real genius the movie is of excellent quality and both e...
###Markdown
Project 1: Quick Theory ValidationThere are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.You'll find the [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) class to be useful in this exercise, as well as the [numpy](https://docs.scipy.org/doc/numpy/reference/) library.
###Code
from collections import Counter
import numpy as np
###Output
_____no_output_____
###Markdown
We'll create three `Counter` objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
###Code
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
###Output
_____no_output_____
###Markdown
**TODO:** Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.**Note:** Throughout these projects, you should use `split(' ')` to divide a piece of text (such as a review) into individual words. If you use `split()` instead, you'll get slightly different results than what the videos and solutions show.
###Code
# Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
###Output
_____no_output_____
###Markdown
Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
###Code
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
###Output
_____no_output_____
###Markdown
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the **ratios** of word usage between positive and negative reviews.**TODO:** Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in `pos_neg_ratios`. >Hint: the positive-to-negative ratio for a given word can be calculated with `positive_counts[word] / float(negative_counts[word]+1)`. Notice the `+1` in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
###Code
pos_neg_ratios = Counter()
# Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
###Output
_____no_output_____
###Markdown
Examine the ratios you've calculated for a few words:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
Pos-to-neg ratio for 'the' = 1.0607993145235326
Pos-to-neg ratio for 'amazing' = 4.022813688212928
Pos-to-neg ratio for 'terrible' = 0.17744252873563218
###Markdown
Looking closely at the values you just calculated, we see the following: * Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.* Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.* Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The `+1` we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:* Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute value from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.* When comparing absolute values it's easier to do that around zero than one. To fix these issues, we'll convert all of our ratios to new values using logarithms.**TODO:** Go through all the ratios you calculated and convert them to logarithms. (i.e. use `np.log(ratio)`)In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
###Code
# Convert ratios to logs
for word,ratio in pos_neg_ratios.most_common():
pos_neg_ratios[word] = np.log(ratio)
###Output
_____no_output_____
###Markdown
**NOTE:** In the video, Andrew uses the following formulas for the previous cell:> * For any positive words, convert the ratio using `np.log(ratio)`> * For any negative words, convert the ratio using `-np.log(1/(ratio + 0.01))`These won't give you the exact same results as the simpler code we show in this notebook, but the values will be similar. In case that second equation looks strange, here's what it's doing: First, it divides one by a very small number, which will produce a larger positive number. Then, it takes the `log` of that, which produces numbers similar to the ones for the positive words. Finally, it negates the values by adding that minus sign up front. The results are extremely positive and extremely negative words having positive-to-negative ratios with similar magnitudes but opposite signs, just like when we use `np.log(ratio)`. Examine the new ratios you've calculated for the same words from before:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
Pos-to-neg ratio for 'the' = 0.05902269426102881
Pos-to-neg ratio for 'amazing' = 1.3919815802404802
Pos-to-neg ratio for 'terrible' = -1.7291085042663878
###Markdown
If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above `1`, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below `-1`. It's now clear that both of these words are associated with specific, opposing sentiments.Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see *all* the words in the list.)The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write `reversed(pos_neg_ratios.most_common())`.)You should continue to see values similar to the earlier ones we checked – neutral words will be close to `0`, words will get more positive as their ratios approach and go above `1`, and words will get more negative as their ratios approach and go below `-1`. That's why we decided to use the logs instead of the raw ratios.
###Code
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
###Output
_____no_output_____
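###Markdown
As a side note, here is a minimal sketch of the alternative conversion Andrew uses in the video (described in the note earlier). It recomputes the raw ratios into a separate, hypothetical `pos_neg_ratios_alt` counter so the log values already stored in `pos_neg_ratios` are left untouched.
###Code
# Sketch of the alternative log conversion from the earlier note.
# pos_neg_ratios_alt is a hypothetical counter used purely for comparison.
pos_neg_ratios_alt = Counter()
for term, cnt in list(total_counts.most_common()):
    if(cnt > 100):
        ratio = positive_counts[term] / float(negative_counts[term] + 1)
        if(ratio > 1):
            pos_neg_ratios_alt[term] = np.log(ratio)
        else:
            pos_neg_ratios_alt[term] = -np.log(1/(ratio + 0.01))
print("Alternative ratio for 'amazing' = {}".format(pos_neg_ratios_alt["amazing"]))
print("Alternative ratio for 'terrible' = {}".format(pos_neg_ratios_alt["terrible"]))
###Output
_____no_output_____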
###Markdown
End of Project 1. Watch the next video to continue with Andrew's next lesson. Transforming Text into Numbers
###Code
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
###Output
_____no_output_____
###Markdown
Project 2: Creating the Input/Output Data**TODO:** Create a [set](https://docs.python.org/3/tutorial/datastructures.html#sets) named `vocab` that contains every word in the vocabulary.
###Code
vocab = set(total_counts.keys())
###Output
_____no_output_____
###Markdown
Run the following cell to check your vocabulary size. If everything worked correctly, it should print **74074**
###Code
vocab_size = len(vocab)
print(vocab_size)
###Output
74074
###Markdown
Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. `layer_0` is the input layer, `layer_1` is a hidden layer, and `layer_2` is the output layer.
###Code
from IPython.display import Image
Image(filename='sentiment_network_2.png')
###Output
_____no_output_____
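###Markdown
Before implementing it, here is a minimal shape sketch of the layers described above (every name prefixed with `example_` is hypothetical and only for illustration; the real network gets built in Project 3).
###Code
# Hedged sketch: the shapes of the layers shown in the image above,
# using a hypothetical hidden layer size of 10.
example_hidden_nodes = 10
example_layer_0 = np.zeros((1, vocab_size))                         # input layer: 1 x vocab_size
example_weights_0_1 = np.zeros((vocab_size, example_hidden_nodes))  # input-to-hidden weights
example_layer_1 = example_layer_0.dot(example_weights_0_1)          # hidden layer: 1 x 10
print(example_layer_0.shape, example_layer_1.shape)
###Output
_____no_output_____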
###Markdown
**TODO:** Create a numpy array called `layer_0` and initialize it to all zeros. You will find the [zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) function particularly helpful here. Be sure you create `layer_0` as a 2-dimensional matrix with 1 row and `vocab_size` columns.
###Code
layer_0 = np.zeros((1,vocab_size))
###Output
_____no_output_____
###Markdown
Run the following cell. It should display `(1, 74074)`
###Code
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
###Output
_____no_output_____
###Markdown
`layer_0` contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
###Code
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `update_input_layer`. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside `layer_0`.
###Code
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
# count how many times each word is used in the given review and store the results in layer_0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
###Output
_____no_output_____
###Markdown
Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in `layer_0`.
###Code
update_input_layer(reviews[0])
layer_0
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `get_target_for_label`. It should return `0` or `1`, depending on whether the given label is `NEGATIVE` or `POSITIVE`, respectively.
###Code
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
if(label == 'POSITIVE'):
return 1
else:
return 0
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out`'POSITIVE'` and `1`, respectively.
###Code
labels[0]
get_target_for_label(labels[0])
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out `'NEGATIVE'` and `0`, respectively.
###Code
labels[1]
get_target_for_label(labels[1])
###Output
_____no_output_____
###Markdown
End of Project 2 solution. Watch the next video to continue with Andrew's next lesson. Project 3: Building a Neural Network **TODO:** We've included the framework of a class called `SentimentNetwork`. Implement all of the items marked `TODO` in the code. These include doing the following:- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. - Do **not** add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.- Re-use the code from earlier in this notebook to create the training data (see `TODO`s in the code)- Implement the `pre_process_data` function to create the vocabulary for our training data generating functions- Ensure `train` trains over the entire corpus Where to Get Help if You Need it- Re-watch previous week's Udacity Lectures- Chapters 3-5 - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) - (Check inside your classroom for a discount code)
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
        # reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
            # will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
        # Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to create a `SentimentNetwork` that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
###Output
_____no_output_____
###Markdown
Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). **We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.**
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):1065. #Correct:500 #Tested:1000 Testing Accuracy:50.0%
###Markdown
Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
###Code
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):227.4 #Correct:1251 #Trained:2501 Training Accuracy:50.0%
Progress:20.8% Speed(reviews/sec):223.8 #Correct:2501 #Trained:5001 Training Accuracy:50.0%
Progress:31.2% Speed(reviews/sec):215.9 #Correct:3751 #Trained:7501 Training Accuracy:50.0%
Progress:41.6% Speed(reviews/sec):219.7 #Correct:5001 #Trained:10001 Training Accuracy:50.0%
Progress:52.0% Speed(reviews/sec):218.9 #Correct:6251 #Trained:12501 Training Accuracy:50.0%
Progress:62.5% Speed(reviews/sec):218.2 #Correct:7501 #Trained:15001 Training Accuracy:50.0%
Progress:72.9% Speed(reviews/sec):218.8 #Correct:8751 #Trained:17501 Training Accuracy:50.0%
Progress:83.3% Speed(reviews/sec):219.6 #Correct:10001 #Trained:20001 Training Accuracy:50.0%
Progress:93.7% Speed(reviews/sec):219.2 #Correct:11251 #Trained:22501 Training Accuracy:50.0%
Progress:99.9% Speed(reviews/sec):219.0 #Correct:12000 #Trained:24000 Training Accuracy:50.0%
###Markdown
That most likely didn't train very well. Part of the reason may be because the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, `0.01`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):168.5 #Correct:1248 #Trained:2501 Training Accuracy:49.9%
Progress:20.8% Speed(reviews/sec):165.7 #Correct:2498 #Trained:5001 Training Accuracy:49.9%
Progress:31.2% Speed(reviews/sec):164.7 #Correct:3748 #Trained:7501 Training Accuracy:49.9%
Progress:41.6% Speed(reviews/sec):163.9 #Correct:4998 #Trained:10001 Training Accuracy:49.9%
Progress:52.0% Speed(reviews/sec):163.4 #Correct:6248 #Trained:12501 Training Accuracy:49.9%
Progress:62.5% Speed(reviews/sec):162.9 #Correct:7491 #Trained:15001 Training Accuracy:49.9%
Progress:72.9% Speed(reviews/sec):162.1 #Correct:8741 #Trained:17501 Training Accuracy:49.9%
Progress:83.3% Speed(reviews/sec):161.8 #Correct:9991 #Trained:20001 Training Accuracy:49.9%
Progress:93.7% Speed(reviews/sec):161.4 #Correct:11241 #Trained:22501 Training Accuracy:49.9%
Progress:99.9% Speed(reviews/sec):161.3 #Correct:11990 #Trained:24000 Training Accuracy:49.9%
###Markdown
That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, `0.001`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):170.6 #Correct:1256 #Trained:2501 Training Accuracy:50.2%
Progress:20.8% Speed(reviews/sec):165.1 #Correct:2639 #Trained:5001 Training Accuracy:52.7%
Progress:31.2% Speed(reviews/sec):164.5 #Correct:4110 #Trained:7501 Training Accuracy:54.7%
Progress:41.6% Speed(reviews/sec):162.3 #Correct:5674 #Trained:10001 Training Accuracy:56.7%
Progress:52.0% Speed(reviews/sec):161.7 #Correct:7251 #Trained:12501 Training Accuracy:58.0%
Progress:62.5% Speed(reviews/sec):160.9 #Correct:8872 #Trained:15001 Training Accuracy:59.1%
Progress:72.9% Speed(reviews/sec):160.3 #Correct:10509 #Trained:17501 Training Accuracy:60.0%
Progress:83.3% Speed(reviews/sec):160.2 #Correct:12218 #Trained:20001 Training Accuracy:61.0%
Progress:93.7% Speed(reviews/sec):160.1 #Correct:13868 #Trained:22501 Training Accuracy:61.6%
Progress:99.9% Speed(reviews/sec):160.2 #Correct:14942 #Trained:24000 Training Accuracy:62.2%
###Markdown
With a learning rate of `0.001`, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson. End of Project 3. Watch the next video to continue with Andrew's next lesson. Understanding Neural Noise
###Code
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
###Output
_____no_output_____
###Markdown
Project 4: Reducing Noise in Our Input Data**TODO:** Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:* Copy the `SentimentNetwork` class you created earlier into the following cell.* Modify `update_input_layer` so it does not count how many times each word is used, but rather just stores whether or not a word was used. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 4"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
        # reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
            # will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
## New for Project 4: changed to set to 1 instead of add 1
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
        # Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):1280. #Correct:857 #Tested:1000 Testing Accuracy:85.7%
###Markdown
End of Project 4 solution. Watch the next video to continue with Andrew's next lesson. Analyzing Inefficiencies in our Network
###Code
Image(filename='sentiment_network_sparse.png')
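# Demo: with an input vector that is mostly zeros, the full dot product below wastes work multiplying by zero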
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
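# The same hidden-layer values can be computed by just summing the weight rows for the non-zero indices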
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
###Output
_____no_output_____
###Markdown
Project 5: Making our Network More Efficient**TODO:** Make the `SentimentNetwork` class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Remove the `update_input_layer` function - you will not need it in this version.* Modify `init_network`:>* You no longer need a separate input layer, so remove any mention of `self.layer_0`>* You will be dealing with the old hidden layer more directly, so create `self.layer_1`, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero* Modify `train`:>* Change the name of the input parameter `training_reviews` to `training_reviews_raw`. This will help with the next step.>* At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from `word2index`) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local `list` variable named `training_reviews` that should contain a `list` for each review in `training_reviews_raw`. Those lists should contain the indices for words found in the review.>* Remove call to `update_input_layer`>* Use `self`'s `layer_1` instead of a local `layer_1` object.>* In the forward pass, replace the code that updates `layer_1` with new logic that only adds the weights for the indices used in the review.>* When updating `weights_0_1`, only update the individual weights that were used in the forward pass.* Modify `run`:>* Remove call to `update_input_layer` >* Use `self`'s `layer_1` instead of a local `layer_1` object.>* Much like you did in `train`, you will need to pre-process the `review` so you can work with word indices, then update `layer_1` by adding weights for the indices used in the review. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 5"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
        # reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
        # The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
    ## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
            ## New for Project 5: changed to use 'self.layer_1' instead of a local 'layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
        # Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it once again.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):1166. #Correct:1694 #Trained:2501 Training Accuracy:67.7%
Progress:20.8% Speed(reviews/sec):1143. #Correct:3678 #Trained:5001 Training Accuracy:73.5%
Progress:31.2% Speed(reviews/sec):1142. #Correct:5752 #Trained:7501 Training Accuracy:76.6%
Progress:41.6% Speed(reviews/sec):1150. #Correct:7880 #Trained:10001 Training Accuracy:78.7%
Progress:52.0% Speed(reviews/sec):1145. #Correct:10014 #Trained:12501 Training Accuracy:80.1%
Progress:62.5% Speed(reviews/sec):1146. #Correct:12143 #Trained:15001 Training Accuracy:80.9%
Progress:72.9% Speed(reviews/sec):1145. #Correct:14269 #Trained:17501 Training Accuracy:81.5%
Progress:83.3% Speed(reviews/sec):1139. #Correct:16449 #Trained:20001 Training Accuracy:82.2%
Progress:93.7% Speed(reviews/sec):1132. #Correct:18629 #Trained:22501 Training Accuracy:82.7%
Progress:99.9% Speed(reviews/sec):1130. #Correct:19945 #Trained:24000 Training Accuracy:83.1%
###Markdown
That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):1623. #Correct:846 #Tested:1000 Testing Accuracy:84.6%
###Markdown
End of Project 5 solution. Watch the next video to continue with Andrew's next lesson. Further Noise Reduction
###Code
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
# note: density=True already normalizes the histogram; recent NumPy rejects the old 'normed' argument
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
###Output
_____no_output_____
###Markdown
Project 6: Reducing Noise by Strategically Reducing the Vocabulary**TODO:** Improve `SentimentNetwork`'s performance by reducing more noise in the vocabulary. Specifically, do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Modify `pre_process_data`:>* Add two additional parameters: `min_count` and `polarity_cutoff`>* Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)>* Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. >* Change so words are only added to the vocabulary if they occur more than `min_count` times.>* Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least `polarity_cutoff`* Modify `__init__`:>* Add the same two parameters (`min_count` and `polarity_cutoff`) and use them when you call `pre_process_data` The following code is the same as the previous project, with project-specific changes marked with `"New for Project 6"`
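Before reading the full class, it may help to see just the new filtering idea in isolation. The sketch below uses toy data standing in for the real `total_counts` and `pos_neg_ratios` that `pre_process_data` builds, so treat the numbers as illustrative only:

```python
from collections import Counter

# toy stand-ins for the structures pre_process_data computes
reviews = ["this movie was excellent", "this movie was terrible"]
total_counts = Counter(" ".join(reviews).split(" "))
pos_neg_ratios = Counter({"excellent": 1.4, "terrible": -1.7, "this": 0.02})
min_count, polarity_cutoff = 0, 0.05

review_vocab = set()
for review in reviews:
    for word in review.split(" "):
        if total_counts[word] > min_count:
            # words with a known ratio must be polarized enough; other words pass through
            if word in pos_neg_ratios:
                if abs(pos_neg_ratios[word]) >= polarity_cutoff:
                    review_vocab.add(word)
            else:
                review_vocab.add(word)

print(review_vocab)  # 'this' is filtered out because |0.02| < 0.05
```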
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
## New for Project 6: added min_count and polarity_cutoff parameters
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count(int) - Words should only be added to the vocabulary
if they occur more than this many times
polarity_cutoff(float) - The absolute value of a word's positive-to-negative
ratio must be at least this big to be considered.
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
## New for Project 6: added min_count and polarity_cutoff parameters
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
## ----------------------------------------
## New for Project 6: Calculate positive-to-negative ratios for words before
# building vocabulary
#
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
#
## end New for Project 6
## ----------------------------------------
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
## New for Project 6: only add words that occur at least min_count times
# and for words with pos/neg ratios, only add words
# that meet the polarity_cutoff
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The input layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the testing_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a small polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):1307. #Correct:1994 #Trained:2501 Training Accuracy:79.7%
Progress:20.8% Speed(reviews/sec):1281. #Correct:4063 #Trained:5001 Training Accuracy:81.2%
Progress:31.2% Speed(reviews/sec):1284. #Correct:6176 #Trained:7501 Training Accuracy:82.3%
Progress:41.6% Speed(reviews/sec):1288. #Correct:8336 #Trained:10001 Training Accuracy:83.3%
Progress:52.0% Speed(reviews/sec):1286. #Correct:10501 #Trained:12501 Training Accuracy:84.0%
Progress:62.5% Speed(reviews/sec):1287. #Correct:12641 #Trained:15001 Training Accuracy:84.2%
Progress:72.9% Speed(reviews/sec):1283. #Correct:14782 #Trained:17501 Training Accuracy:84.4%
Progress:83.3% Speed(reviews/sec):1279. #Correct:16954 #Trained:20001 Training Accuracy:84.7%
Progress:93.7% Speed(reviews/sec):1276. #Correct:19143 #Trained:22501 Training Accuracy:85.0%
Progress:99.9% Speed(reviews/sec):1275. #Correct:20461 #Trained:24000 Training Accuracy:85.2%
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):1903. #Correct:859 #Tested:1000 Testing Accuracy:85.9%
###Markdown
Run the following cell to train your network with a much larger polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):6770. #Correct:2114 #Trained:2501 Training Accuracy:84.5%
Progress:20.8% Speed(reviews/sec):6416. #Correct:4235 #Trained:5001 Training Accuracy:84.6%
Progress:31.2% Speed(reviews/sec):6389. #Correct:6362 #Trained:7501 Training Accuracy:84.8%
Progress:41.6% Speed(reviews/sec):6406. #Correct:8513 #Trained:10001 Training Accuracy:85.1%
Progress:52.0% Speed(reviews/sec):6447. #Correct:10641 #Trained:12501 Training Accuracy:85.1%
Progress:62.5% Speed(reviews/sec):6367. #Correct:12796 #Trained:15001 Training Accuracy:85.3%
Progress:72.9% Speed(reviews/sec):6376. #Correct:14911 #Trained:17501 Training Accuracy:85.2%
Progress:83.3% Speed(reviews/sec):6405. #Correct:17077 #Trained:20001 Training Accuracy:85.3%
Progress:93.7% Speed(reviews/sec):6403. #Correct:19258 #Trained:22501 Training Accuracy:85.5%
Progress:99.9% Speed(reviews/sec):6424. #Correct:20552 #Trained:24000 Training Accuracy:85.6%
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):6031. #Correct:822 #Tested:1000 Testing Accuracy:82.2%
###Markdown
End of Project 6 solution. Watch the next video to continue with Andrew's next lesson. Analysis: What's Going on in the Weights?
###Code
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
###Output
/anaconda3/envs/sentiment/lib/python3.6/site-packages/bokeh/util/deprecation.py:34: BokehDeprecationWarning:
Supplying a user-defined data source AND iterable values to glyph methods is deprecated.
See https://github.com/bokeh/bokeh/issues/2056 for more information.
warn(message)
/anaconda3/envs/sentiment/lib/python3.6/site-packages/bokeh/util/deprecation.py:34: BokehDeprecationWarning:
Supplying a user-defined data source AND iterable values to glyph methods is deprecated.
See https://github.com/bokeh/bokeh/issues/2056 for more information.
warn(message)
###Markdown
Sentiment Classification & How To "Frame Problems" for a Neural Networkby Andrew Trask- **Twitter**: @iamtrask- **Blog**: http://iamtrask.github.io What You Should Already Know- neural networks, forward and back-propagation- stochastic gradient descent- mean squared error- and train/test splits Where to Get Help if You Need it- Re-watch previous Udacity Lectures- Leverage the recommended Course Reading Material - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) (Check inside your classroom for a discount code)- Shoot me a tweet @iamtrask Tutorial Outline:- Intro: The Importance of "Framing a Problem" (this lesson)- [Curate a Dataset](lesson_1)- [Developing a "Predictive Theory"](lesson_2)- [**PROJECT 1**: Quick Theory Validation](project_1)- [Transforming Text to Numbers](lesson_3)- [**PROJECT 2**: Creating the Input/Output Data](project_2)- Putting it all together in a Neural Network (video only - nothing in notebook)- [**PROJECT 3**: Building our Neural Network](project_3)- [Understanding Neural Noise](lesson_4)- [**PROJECT 4**: Making Learning Faster by Reducing Noise](project_4)- [Analyzing Inefficiencies in our Network](lesson_5)- [**PROJECT 5**: Making our Network Train and Run Faster](project_5)- [Further Noise Reduction](lesson_6)- [**PROJECT 6**: Reducing Noise by Strategically Reducing the Vocabulary](project_6)- [Analysis: What's going on in the weights?](lesson_7) Lesson: Curate a Dataset
###Code
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
###Output
_____no_output_____
###Markdown
**Note:** The data in `reviews.txt` we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way.
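For example, if we did need that step, a minimal normalization pass might look like the sketch below (not required for this dataset, since `reviews.txt` is already lower-cased; the sample strings are made up):

```python
raw_reviews = ["The movie was great", "THE ending was not"]
reviews_normalized = [review.lower() for review in raw_reviews]
print(reviews_normalized)  # ['the movie was great', 'the ending was not']
```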
###Code
len(reviews)
reviews[0]
labels[0]
###Output
_____no_output_____
###Markdown
Lesson: Develop a Predictive Theory
###Code
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
###Output
labels.txt : reviews.txt
NEGATIVE : this movie is terrible but it has some good effects . ...
POSITIVE : adrian pasdar is excellent is this film . he makes a fascinating woman . ...
NEGATIVE : comment this movie is impossible . is terrible very improbable bad interpretat...
POSITIVE : excellent episode movie ala pulp fiction . days suicides . it doesnt get more...
NEGATIVE : if you haven t seen this it s terrible . it is pure trash . i saw this about ...
POSITIVE : this schiffer guy is a real genius the movie is of excellent quality and both e...
###Markdown
Project 1: Quick Theory ValidationThere are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.You'll find the [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) class to be useful in this exercise, as well as the [numpy](https://docs.scipy.org/doc/numpy/reference/) library.
###Code
from collections import Counter
import numpy as np
###Output
_____no_output_____
###Markdown
We'll create three `Counter` objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
###Code
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
###Output
_____no_output_____
###Markdown
**TODO:** Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.**Note:** Throughout these projects, you should use `split(' ')` to divide a piece of text (such as a review) into individual words. If you use `split()` instead, you'll get slightly different results than what the videos and solutions show.
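To see why the two calls differ: `split(' ')` produces an empty string for every extra consecutive space, while `split()` collapses all whitespace, so the two give slightly different word counts on this data. A quick illustration with a made-up string:

```python
text = "this movie  was great"   # note the double space
print(text.split(' '))  # ['this', 'movie', '', 'was', 'great']
print(text.split())     # ['this', 'movie', 'was', 'great']
```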
###Code
# Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
###Output
_____no_output_____
###Markdown
Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
###Code
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
###Output
_____no_output_____
###Markdown
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the **ratios** of word usage between positive and negative reviews.**TODO:** Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in `pos_neg_ratios`. >Hint: the positive-to-negative ratio for a given word can be calculated with `positive_counts[word] / float(negative_counts[word]+1)`. Notice the `+1` in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
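As a quick sanity check of that formula with made-up counts, a word seen 1000 times in positive reviews and 100 times in negative reviews gets a large ratio, the reverse gets a small one, and a balanced word lands near 1:

```python
print(1000 / float(100 + 1))   # ~9.9   -> strongly positive
print(100 / float(1000 + 1))   # ~0.1   -> strongly negative
print(500 / float(500 + 1))    # ~0.998 -> roughly neutral
```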
###Code
pos_neg_ratios = Counter()
# Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used more than 100 times
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
###Output
_____no_output_____
###Markdown
Examine the ratios you've calculated for a few words:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
Pos-to-neg ratio for 'the' = 1.0607993145235326
Pos-to-neg ratio for 'amazing' = 4.022813688212928
Pos-to-neg ratio for 'terrible' = 0.17744252873563218
###Markdown
Looking closely at the values you just calculated, we see the following: * Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.* Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.* Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The `+1` we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:* Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute value from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.* When comparing absolute values it's easier to do that around zero than one. To fix these issues, we'll convert all of our ratios to new values using logarithms.**TODO:** Go through all the ratios you calculated and convert them to logarithms. (i.e. use `np.log(ratio)`)In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
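To make the symmetry concrete with made-up ratios: a word that is four times more common in positive reviews and a word that is four times more common in negative reviews end up the same distance from zero after taking the log, just with opposite signs:

```python
import numpy as np

print(np.log(4.0))    # ~ 1.386  (strong positive word)
print(np.log(0.25))   # ~ -1.386 (equally strong negative word)
print(np.log(1.0))    #   0.0    (perfectly neutral word)
```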
###Code
# Convert ratios to logs
for word,ratio in pos_neg_ratios.most_common():
pos_neg_ratios[word] = np.log(ratio)
###Output
_____no_output_____
###Markdown
**NOTE:** In the video, Andrew uses the following formulas for the previous cell:> * For any positive words, convert the ratio using `np.log(ratio)`> * For any negative words, convert the ratio using `-np.log(1/(ratio + 0.01))`These won't give you the exact same results as the simpler code we show in this notebook, but the values will be similar. In case that second equation looks strange, here's what it's doing: First, it divides one by a very small number, which will produce a larger positive number. Then, it takes the `log` of that, which produces numbers similar to the ones for the positive words. Finally, it negates the values by adding that minus sign up front. The result is that extremely positive and extremely negative words have positive-to-negative ratios with similar magnitudes but opposite signs, just like when we use `np.log(ratio)`. Examine the new ratios you've calculated for the same words from before:
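As a rough numeric check with a made-up negative-leaning ratio, the two conversions do land close to each other:

```python
import numpy as np

ratio = 0.25
print(np.log(ratio))               # ~ -1.386 (the simpler form used in this notebook)
print(-np.log(1/(ratio + 0.01)))   # ~ -1.347 (the form Andrew uses in the video)
```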
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
Pos-to-neg ratio for 'the' = 0.05902269426102881
Pos-to-neg ratio for 'amazing' = 1.3919815802404802
Pos-to-neg ratio for 'terrible' = -1.7291085042663878
###Markdown
If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above `1`, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below `-1`. It's now clear that both of these words are associated with specific, opposing sentiments.Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see *all* the words in the list.)The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write `reversed(pos_neg_ratios.most_common())`.)You should continue to see values similar to the earlier ones we checked – neutral words will be close to `0`, words will get more positive as their ratios approach and go above `1`, and words will get more negative as their ratios approach and go below `-1`. That's why we decided to use the logs instead of the raw ratios.
###Code
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
###Output
_____no_output_____
###Markdown
End of Project 1. Watch the next video to continue with Andrew's next lesson. Transforming Text into Numbers
###Code
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
###Output
_____no_output_____
###Markdown
Project 2: Creating the Input/Output Data**TODO:** Create a [set](https://docs.python.org/3/tutorial/datastructures.html#sets) named `vocab` that contains every word in the vocabulary.
###Code
vocab = set(total_counts.keys())
###Output
_____no_output_____
###Markdown
Run the following cell to check your vocabulary size. If everything worked correctly, it should print **74074**
###Code
vocab_size = len(vocab)
print(vocab_size)
###Output
74074
###Markdown
Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. `layer_0` is the input layer, `layer_1` is a hidden layer, and `layer_2` is the output layer.
###Code
from IPython.display import Image
Image(filename='sentiment_network_2.png')
###Output
_____no_output_____
###Markdown
**TODO:** Create a numpy array called `layer_0` and initialize it to all zeros. You will find the [zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) function particularly helpful here. Be sure you create `layer_0` as a 2-dimensional matrix with 1 row and `vocab_size` columns.
###Code
layer_0 = np.zeros((1,vocab_size))
###Output
_____no_output_____
###Markdown
Run the following cell. It should display `(1, 74074)`
###Code
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
###Output
_____no_output_____
###Markdown
`layer_0` contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
###Code
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `update_input_layer`. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside `layer_0`.
###Code
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
# count how many times each word is used in the given review and store the results in layer_0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
###Output
_____no_output_____
###Markdown
Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in `layer_0`.
###Code
update_input_layer(reviews[0])
layer_0
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `get_target_for_label`. It should return `0` or `1`, depending on whether the given label is `NEGATIVE` or `POSITIVE`, respectively.
###Code
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
if(label == 'POSITIVE'):
return 1
else:
return 0
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out`'POSITIVE'` and `1`, respectively.
###Code
labels[0]
get_target_for_label(labels[0])
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out `'NEGATIVE'` and `0`, respectively.
###Code
labels[1]
get_target_for_label(labels[1])
###Output
_____no_output_____
###Markdown
End of Project 2 solution. Watch the next video to continue with Andrew's next lesson. Project 3: Building a Neural Network **TODO:** We've included the framework of a class called `SentimentNetwork`. Implement all of the items marked `TODO` in the code. These include doing the following:- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. - Do **not** add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.- Re-use the code from earlier in this notebook to create the training data (see `TODO`s in the code)- Implement the `pre_process_data` function to create the vocabulary for our training data generating functions- Ensure `train` trains over the entire corpus Where to Get Help if You Need it- Re-watch previous week's Udacity Lectures- Chapters 3-5 - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) - (Check inside your classroom for a discount code)
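Stripped of the bookkeeping, the forward pass this architecture implies is just a matrix product for the hidden layer (no activation function) followed by a sigmoid on the output. A minimal sketch with toy shapes, where the names mirror the attributes the class below defines:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# toy sizes: 5 input nodes, 3 hidden nodes, 1 output node
layer_0 = np.zeros((1, 5))                        # input vector of word counts
weights_0_1 = np.zeros((5, 3))                    # input-to-hidden weights
weights_1_2 = np.random.normal(0.0, 1.0, (3, 1))  # hidden-to-output weights

layer_1 = layer_0.dot(weights_0_1)                # hidden layer: linear, no activation
layer_2 = sigmoid(layer_1.dot(weights_1_2))       # output layer: sigmoid
```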
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the testing_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to create a `SentimentNetwork` that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
###Output
_____no_output_____
###Markdown
Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). **We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.**
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Tested:1 Testing Accuracy:100.%
Progress:0.1% Speed(reviews/sec):55.68 #Correct:1 #Tested:2 Testing Accuracy:50.0%
Progress:0.2% Speed(reviews/sec):111.3 #Correct:2 #Tested:3 Testing Accuracy:66.6%
Progress:0.3% Speed(reviews/sec):158.2 #Correct:2 #Tested:4 Testing Accuracy:50.0%
Progress:0.4% Speed(reviews/sec):200.4 #Correct:3 #Tested:5 Testing Accuracy:60.0%
Progress:0.5% Speed(reviews/sec):238.6 #Correct:3 #Tested:6 Testing Accuracy:50.0%
Progress:0.6% Speed(reviews/sec):273.4 #Correct:4 #Tested:7 Testing Accuracy:57.1%
Progress:0.7% Speed(reviews/sec):292.3 #Correct:4 #Tested:8 Testing Accuracy:50.0%
Progress:0.8% Speed(reviews/sec):320.8 #Correct:5 #Tested:9 Testing Accuracy:55.5%
Progress:0.9% Speed(reviews/sec):334.1 #Correct:5 #Tested:10 Testing Accuracy:50.0%
Progress:1.0% Speed(reviews/sec):358.0 #Correct:6 #Tested:11 Testing Accuracy:54.5%
Progress:1.1% Speed(reviews/sec):380.2 #Correct:6 #Tested:12 Testing Accuracy:50.0%
Progress:1.2% Speed(reviews/sec):401.0 #Correct:7 #Tested:13 Testing Accuracy:53.8%
Progress:1.3% Speed(reviews/sec):420.4 #Correct:7 #Tested:14 Testing Accuracy:50.0%
Progress:1.4% Speed(reviews/sec):438.6 #Correct:8 #Tested:15 Testing Accuracy:53.3%
Progress:1.5% Speed(reviews/sec):455.7 #Correct:8 #Tested:16 Testing Accuracy:50.0%
Progress:1.6% Speed(reviews/sec):458.3 #Correct:9 #Tested:17 Testing Accuracy:52.9%
Progress:1.7% Speed(reviews/sec):460.6 #Correct:9 #Tested:18 Testing Accuracy:50.0%
Progress:1.8% Speed(reviews/sec):474.8 #Correct:10 #Tested:19 Testing Accuracy:52.6%
Progress:1.9% Speed(reviews/sec):488.4 #Correct:10 #Tested:20 Testing Accuracy:50.0%
Progress:2.0% Speed(reviews/sec):489.0 #Correct:11 #Tested:21 Testing Accuracy:52.3%
Progress:2.1% Speed(reviews/sec):501.2 #Correct:11 #Tested:22 Testing Accuracy:50.0%
Progress:2.2% Speed(reviews/sec):512.9 #Correct:12 #Tested:23 Testing Accuracy:52.1%
Progress:2.3% Speed(reviews/sec):524.0 #Correct:12 #Tested:24 Testing Accuracy:50.0%
Progress:2.4% Speed(reviews/sec):523.0 #Correct:13 #Tested:25 Testing Accuracy:52.0%
Progress:2.5% Speed(reviews/sec):522.1 #Correct:13 #Tested:26 Testing Accuracy:50.0%
Progress:2.6% Speed(reviews/sec):531.9 #Correct:14 #Tested:27 Testing Accuracy:51.8%
Progress:2.7% Speed(reviews/sec):520.5 #Correct:14 #Tested:28 Testing Accuracy:50.0%
Progress:2.8% Speed(reviews/sec):529.6 #Correct:15 #Tested:29 Testing Accuracy:51.7%
Progress:2.9% Speed(reviews/sec):538.4 #Correct:15 #Tested:30 Testing Accuracy:50.0%
Progress:3.0% Speed(reviews/sec):537.1 #Correct:16 #Tested:31 Testing Accuracy:51.6%
Progress:3.1% Speed(reviews/sec):545.2 #Correct:16 #Tested:32 Testing Accuracy:50.0%
Progress:3.2% Speed(reviews/sec):553.1 #Correct:17 #Tested:33 Testing Accuracy:51.5%
Progress:3.3% Speed(reviews/sec):560.7 #Correct:17 #Tested:34 Testing Accuracy:50.0%
Progress:3.4% Speed(reviews/sec):558.8 #Correct:18 #Tested:35 Testing Accuracy:51.4%
Progress:3.5% Speed(reviews/sec):565.9 #Correct:18 #Tested:36 Testing Accuracy:50.0%
Progress:3.6% Speed(reviews/sec):563.9 #Correct:19 #Tested:37 Testing Accuracy:51.3%
Progress:3.7% Speed(reviews/sec):579.6 #Correct:19 #Tested:38 Testing Accuracy:50.0%
Progress:3.8% Speed(reviews/sec):577.2 #Correct:20 #Tested:39 Testing Accuracy:51.2%
Progress:3.9% Speed(reviews/sec):573.2 #Correct:20 #Tested:40 Testing Accuracy:50.0%
Progress:4.0% Speed(reviews/sec):579.4 #Correct:21 #Tested:41 Testing Accuracy:51.2%
Progress:4.1% Speed(reviews/sec):585.5 #Correct:21 #Tested:42 Testing Accuracy:50.0%
Progress:4.2% Speed(reviews/sec):575.2 #Correct:22 #Tested:43 Testing Accuracy:51.1%
Progress:4.3% Speed(reviews/sec):580.9 #Correct:22 #Tested:44 Testing Accuracy:50.0%
Progress:4.4% Speed(reviews/sec):571.3 #Correct:23 #Tested:45 Testing Accuracy:51.1%
Progress:4.5% Speed(reviews/sec):562.5 #Correct:23 #Tested:46 Testing Accuracy:50.0%
Progress:4.6% Speed(reviews/sec):561.0 #Correct:24 #Tested:47 Testing Accuracy:51.0%
Progress:4.7% Speed(reviews/sec):566.3 #Correct:24 #Tested:48 Testing Accuracy:50.0%
Progress:4.8% Speed(reviews/sec):571.5 #Correct:25 #Tested:49 Testing Accuracy:51.0%
Progress:4.9% Speed(reviews/sec):576.5 #Correct:25 #Tested:50 Testing Accuracy:50.0%
Progress:5.0% Speed(reviews/sec):581.5 #Correct:26 #Tested:51 Testing Accuracy:50.9%
Progress:5.1% Speed(reviews/sec):585.3 #Correct:26 #Tested:52 Testing Accuracy:50.0%
Progress:5.2% Speed(reviews/sec):590.1 #Correct:27 #Tested:53 Testing Accuracy:50.9%
Progress:5.3% Speed(reviews/sec):594.7 #Correct:27 #Tested:54 Testing Accuracy:50.0%
Progress:5.4% Speed(reviews/sec):599.2 #Correct:28 #Tested:55 Testing Accuracy:50.9%
Progress:5.5% Speed(reviews/sec):603.7 #Correct:28 #Tested:56 Testing Accuracy:50.0%
Progress:5.6% Speed(reviews/sec):608.0 #Correct:29 #Tested:57 Testing Accuracy:50.8%
Progress:5.7% Speed(reviews/sec):612.2 #Correct:29 #Tested:58 Testing Accuracy:50.0%
Progress:5.8% Speed(reviews/sec):616.3 #Correct:30 #Tested:59 Testing Accuracy:50.8%
Progress:5.9% Speed(reviews/sec):620.4 #Correct:30 #Tested:60 Testing Accuracy:50.0%
Progress:6.0% Speed(reviews/sec):624.4 #Correct:31 #Tested:61 Testing Accuracy:50.8%
Progress:6.1% Speed(reviews/sec):621.9 #Correct:31 #Tested:62 Testing Accuracy:50.0%
Progress:6.2% Speed(reviews/sec):625.7 #Correct:32 #Tested:63 Testing Accuracy:50.7%
Progress:6.3% Speed(reviews/sec):629.4 #Correct:32 #Tested:64 Testing Accuracy:50.0%
Progress:6.4% Speed(reviews/sec):626.9 #Correct:33 #Tested:65 Testing Accuracy:50.7%
Progress:6.5% Speed(reviews/sec):630.6 #Correct:33 #Tested:66 Testing Accuracy:50.0%
Progress:6.6% Speed(reviews/sec):634.1 #Correct:34 #Tested:67 Testing Accuracy:50.7%
Progress:6.7% Speed(reviews/sec):637.6 #Correct:34 #Tested:68 Testing Accuracy:50.0%
Progress:6.8% Speed(reviews/sec):641.1 #Correct:35 #Tested:69 Testing Accuracy:50.7%
Progress:6.9% Speed(reviews/sec):638.5 #Correct:35 #Tested:70 Testing Accuracy:50.0%
Progress:7.0% Speed(reviews/sec):636.0 #Correct:36 #Tested:71 Testing Accuracy:50.7%
Progress:7.1% Speed(reviews/sec):645.1 #Correct:36 #Tested:72 Testing Accuracy:50.0%
Progress:7.2% Speed(reviews/sec):648.3 #Correct:37 #Tested:73 Testing Accuracy:50.6%
Progress:7.3% Speed(reviews/sec):651.5 #Correct:37 #Tested:74 Testing Accuracy:50.0%
Progress:7.4% Speed(reviews/sec):654.6 #Correct:38 #Tested:75 Testing Accuracy:50.6%
Progress:7.5% Speed(reviews/sec):651.9 #Correct:38 #Tested:76 Testing Accuracy:50.0%
Progress:7.6% Speed(reviews/sec):654.9 #Correct:39 #Tested:77 Testing Accuracy:50.6%
Progress:7.7% Speed(reviews/sec):657.9 #Correct:39 #Tested:78 Testing Accuracy:50.0%
Progress:7.8% Speed(reviews/sec):660.8 #Correct:40 #Tested:79 Testing Accuracy:50.6%
Progress:7.9% Speed(reviews/sec):658.1 #Correct:40 #Tested:80 Testing Accuracy:50.0%
Progress:8.0% Speed(reviews/sec):661.0 #Correct:41 #Tested:81 Testing Accuracy:50.6%
Progress:8.1% Speed(reviews/sec):647.9 #Correct:41 #Tested:82 Testing Accuracy:50.0%
Progress:8.2% Speed(reviews/sec):645.6 #Correct:42 #Tested:83 Testing Accuracy:50.6%
Progress:8.3% Speed(reviews/sec):648.4 #Correct:42 #Tested:84 Testing Accuracy:50.0%
Progress:8.4% Speed(reviews/sec):651.1 #Correct:43 #Tested:85 Testing Accuracy:50.5%
Progress:8.5% Speed(reviews/sec):653.8 #Correct:43 #Tested:86 Testing Accuracy:50.0%
Progress:8.6% Speed(reviews/sec):651.5 #Correct:44 #Tested:87 Testing Accuracy:50.5%
Progress:8.7% Speed(reviews/sec):654.1 #Correct:44 #Tested:88 Testing Accuracy:50.0%
Progress:8.8% Speed(reviews/sec):656.7 #Correct:45 #Tested:89 Testing Accuracy:50.5%
Progress:8.9% Speed(reviews/sec):648.9 #Correct:45 #Tested:90 Testing Accuracy:50.0%
Progress:9.0% Speed(reviews/sec):651.6 #Correct:46 #Tested:91 Testing Accuracy:50.5%
Progress:9.1% Speed(reviews/sec):654.1 #Correct:46 #Tested:92 Testing Accuracy:50.0%
Progress:9.2% Speed(reviews/sec):656.6 #Correct:47 #Tested:93 Testing Accuracy:50.5%
Progress:9.3% Speed(reviews/sec):654.4 #Correct:47 #Tested:94 Testing Accuracy:50.0%
Progress:9.4% Speed(reviews/sec):656.8 #Correct:48 #Tested:95 Testing Accuracy:50.5%
Progress:9.5% Speed(reviews/sec):659.2 #Correct:48 #Tested:96 Testing Accuracy:50.0%
Progress:9.6% Speed(reviews/sec):661.6 #Correct:49 #Tested:97 Testing Accuracy:50.5%
Progress:9.7% Speed(reviews/sec):659.4 #Correct:49 #Tested:98 Testing Accuracy:50.0%
Progress:9.8% Speed(reviews/sec):661.7 #Correct:50 #Tested:99 Testing Accuracy:50.5%
Progress:9.9% Speed(reviews/sec):664.0 #Correct:50 #Tested:100 Testing Accuracy:50.0%
Progress:10.0% Speed(reviews/sec):666.3 #Correct:51 #Tested:101 Testing Accuracy:50.4%
Progress:10.1% Speed(reviews/sec):668.5 #Correct:51 #Tested:102 Testing Accuracy:50.0%
Progress:10.2% Speed(reviews/sec):670.7 #Correct:52 #Tested:103 Testing Accuracy:50.4%
Progress:10.3% Speed(reviews/sec):668.5 #Correct:52 #Tested:104 Testing Accuracy:50.0%
Progress:10.4% Speed(reviews/sec):670.6 #Correct:53 #Tested:105 Testing Accuracy:50.4%
Progress:10.5% Speed(reviews/sec):672.8 #Correct:53 #Tested:106 Testing Accuracy:50.0%
Progress:10.6% Speed(reviews/sec):674.9 #Correct:54 #Tested:107 Testing Accuracy:50.4%
Progress:10.7% Speed(reviews/sec):676.9 #Correct:54 #Tested:108 Testing Accuracy:50.0%
Progress:10.8% Speed(reviews/sec):679.0 #Correct:55 #Tested:109 Testing Accuracy:50.4%
Progress:10.9% Speed(reviews/sec):681.0 #Correct:55 #Tested:110 Testing Accuracy:50.0%
Progress:11.0% Speed(reviews/sec):683.0 #Correct:56 #Tested:111 Testing Accuracy:50.4%
Progress:11.1% Speed(reviews/sec):684.9 #Correct:56 #Tested:112 Testing Accuracy:50.0%
Progress:11.2% Speed(reviews/sec):678.6 #Correct:57 #Tested:113 Testing Accuracy:50.4%
Progress:11.3% Speed(reviews/sec):676.5 #Correct:57 #Tested:114 Testing Accuracy:50.0%
Progress:11.4% Speed(reviews/sec):674.4 #Correct:58 #Tested:115 Testing Accuracy:50.4%
Progress:11.5% Speed(reviews/sec):676.3 #Correct:58 #Tested:116 Testing Accuracy:50.0%
Progress:11.6% Speed(reviews/sec):678.2 #Correct:59 #Tested:117 Testing Accuracy:50.4%
Progress:11.7% Speed(reviews/sec):676.2 #Correct:59 #Tested:118 Testing Accuracy:50.0%
Progress:11.8% Speed(reviews/sec):674.2 #Correct:60 #Tested:119 Testing Accuracy:50.4%
Progress:11.9% Speed(reviews/sec):672.2 #Correct:60 #Tested:120 Testing Accuracy:50.0%
Progress:12.0% Speed(reviews/sec):670.3 #Correct:61 #Tested:121 Testing Accuracy:50.4%
Progress:12.1% Speed(reviews/sec):672.2 #Correct:61 #Tested:122 Testing Accuracy:50.0%
Progress:12.2% Speed(reviews/sec):674.0 #Correct:62 #Tested:123 Testing Accuracy:50.4%
Progress:12.3% Speed(reviews/sec):668.5 #Correct:62 #Tested:124 Testing Accuracy:50.0%
Progress:12.4% Speed(reviews/sec):670.3 #Correct:63 #Tested:125 Testing Accuracy:50.4%
Progress:12.5% Speed(reviews/sec):668.5 #Correct:63 #Tested:126 Testing Accuracy:50.0%
Progress:12.6% Speed(reviews/sec):670.2 #Correct:64 #Tested:127 Testing Accuracy:50.3%
Progress:12.7% Speed(reviews/sec):668.5 #Correct:64 #Tested:128 Testing Accuracy:50.0%
Progress:12.8% Speed(reviews/sec):670.2 #Correct:65 #Tested:129 Testing Accuracy:50.3%
Progress:12.9% Speed(reviews/sec):671.9 #Correct:65 #Tested:130 Testing Accuracy:50.0%
Progress:13.0% Speed(reviews/sec):673.6 #Correct:66 #Tested:131 Testing Accuracy:50.3%
Progress:13.1% Speed(reviews/sec):671.9 #Correct:66 #Tested:132 Testing Accuracy:50.0%
###Markdown
Run the following cell to actually train the network. During training, it will display the model's accuracy at regular intervals so you can see how well it's doing.
###Code
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):227.4 #Correct:1251 #Trained:2501 Training Accuracy:50.0%
Progress:20.8% Speed(reviews/sec):223.8 #Correct:2501 #Trained:5001 Training Accuracy:50.0%
Progress:31.2% Speed(reviews/sec):215.9 #Correct:3751 #Trained:7501 Training Accuracy:50.0%
Progress:41.6% Speed(reviews/sec):219.7 #Correct:5001 #Trained:10001 Training Accuracy:50.0%
Progress:52.0% Speed(reviews/sec):218.9 #Correct:6251 #Trained:12501 Training Accuracy:50.0%
Progress:62.5% Speed(reviews/sec):218.2 #Correct:7501 #Trained:15001 Training Accuracy:50.0%
Progress:72.9% Speed(reviews/sec):218.8 #Correct:8751 #Trained:17501 Training Accuracy:50.0%
Progress:83.3% Speed(reviews/sec):219.6 #Correct:10001 #Trained:20001 Training Accuracy:50.0%
Progress:93.7% Speed(reviews/sec):219.2 #Correct:11251 #Trained:22501 Training Accuracy:50.0%
Progress:99.9% Speed(reviews/sec):219.0 #Correct:12000 #Trained:24000 Training Accuracy:50.0%
###Markdown
That most likely didn't train very well. Part of the reason may be that the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, `0.01`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):168.5 #Correct:1248 #Trained:2501 Training Accuracy:49.9%
Progress:20.8% Speed(reviews/sec):165.7 #Correct:2498 #Trained:5001 Training Accuracy:49.9%
Progress:31.2% Speed(reviews/sec):164.7 #Correct:3748 #Trained:7501 Training Accuracy:49.9%
Progress:41.6% Speed(reviews/sec):163.9 #Correct:4998 #Trained:10001 Training Accuracy:49.9%
Progress:52.0% Speed(reviews/sec):163.4 #Correct:6248 #Trained:12501 Training Accuracy:49.9%
Progress:62.5% Speed(reviews/sec):162.9 #Correct:7491 #Trained:15001 Training Accuracy:49.9%
Progress:72.9% Speed(reviews/sec):162.1 #Correct:8741 #Trained:17501 Training Accuracy:49.9%
Progress:83.3% Speed(reviews/sec):161.8 #Correct:9991 #Trained:20001 Training Accuracy:49.9%
Progress:93.7% Speed(reviews/sec):161.4 #Correct:11241 #Trained:22501 Training Accuracy:49.9%
Progress:99.9% Speed(reviews/sec):161.3 #Correct:11990 #Trained:24000 Training Accuracy:49.9%
###Markdown
That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, `0.001`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):170.6 #Correct:1256 #Trained:2501 Training Accuracy:50.2%
Progress:20.8% Speed(reviews/sec):165.1 #Correct:2639 #Trained:5001 Training Accuracy:52.7%
Progress:31.2% Speed(reviews/sec):164.5 #Correct:4110 #Trained:7501 Training Accuracy:54.7%
Progress:41.6% Speed(reviews/sec):162.3 #Correct:5674 #Trained:10001 Training Accuracy:56.7%
Progress:52.0% Speed(reviews/sec):161.7 #Correct:7251 #Trained:12501 Training Accuracy:58.0%
Progress:62.5% Speed(reviews/sec):160.9 #Correct:8872 #Trained:15001 Training Accuracy:59.1%
Progress:72.9% Speed(reviews/sec):160.3 #Correct:10509 #Trained:17501 Training Accuracy:60.0%
Progress:83.3% Speed(reviews/sec):160.2 #Correct:12218 #Trained:20001 Training Accuracy:61.0%
Progress:93.7% Speed(reviews/sec):160.1 #Correct:13868 #Trained:22501 Training Accuracy:61.6%
Progress:99.9% Speed(reviews/sec):160.2 #Correct:14942 #Trained:24000 Training Accuracy:62.2%
###Markdown
With a learning rate of `0.001`, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson. End of Project 3. Watch the next video to continue with Andrew's next lesson. Understanding Neural Noise
###Code
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
###Output
_____no_output_____
###Markdown
Project 4: Reducing Noise in Our Input Data**TODO:** Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:* Copy the `SentimentNetwork` class you created earlier into the following cell.* Modify `update_input_layer` so it does not count how many times each word is used, but rather just stores whether or not a word was used. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 4"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
## New for Project 4: changed to set to 1 instead of add 1
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):1280. #Correct:857 #Tested:1000 Testing Accuracy:85.7%
###Markdown
End of Project 4 solution. Watch the next video to continue with Andrew's next lesson. Analyzing Inefficiencies in our Network
###Code
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
###Output
_____no_output_____
###Markdown
Project 5: Making our Network More Efficient**TODO:** Make the `SentimentNetwork` class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Remove the `update_input_layer` function - you will not need it in this version.* Modify `init_network`:>* You no longer need a separate input layer, so remove any mention of `self.layer_0`>* You will be dealing with the old hidden layer more directly, so create `self.layer_1`, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero* Modify `train`:>* Change the name of the input parameter `training_reviews` to `training_reviews_raw`. This will help with the next step.>* At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from `word2index`) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local `list` variable named `training_reviews` that should contain a `list` for each review in `training_reviews_raw`. Those lists should contain the indices for words found in the review.>* Remove call to `update_input_layer`>* Use `self`'s `layer_1` instead of a local `layer_1` object.>* In the forward pass, replace the code that updates `layer_1` with new logic that only adds the weights for the indices used in the review.>* When updating `weights_0_1`, only update the individual weights that were used in the forward pass.* Modify `run`:>* Remove call to `update_input_layer` >* Use `self`'s `layer_1` instead of a local `layer_1` object.>* Much like you did in `train`, you will need to pre-process the `review` so you can work with word indices, then update `layer_1` by adding weights for the indices used in the review. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 5"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The input layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it once again.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):1180. #Correct:1797 #Trained:2501 Training Accuracy:71.8%
Progress:20.8% Speed(reviews/sec):1168. #Correct:3791 #Trained:5001 Training Accuracy:75.8%
Progress:31.2% Speed(reviews/sec):1168. #Correct:5877 #Trained:7501 Training Accuracy:78.3%
Progress:41.6% Speed(reviews/sec):1159. #Correct:8024 #Trained:10001 Training Accuracy:80.2%
Progress:52.0% Speed(reviews/sec):1160. #Correct:10160 #Trained:12501 Training Accuracy:81.2%
Progress:62.5% Speed(reviews/sec):1169. #Correct:12294 #Trained:15001 Training Accuracy:81.9%
Progress:72.9% Speed(reviews/sec):1169. #Correct:14408 #Trained:17501 Training Accuracy:82.3%
Progress:83.3% Speed(reviews/sec):1159. #Correct:16579 #Trained:20001 Training Accuracy:82.8%
Progress:93.7% Speed(reviews/sec):1152. #Correct:18758 #Trained:22501 Training Accuracy:83.3%
Progress:97.4% Speed(reviews/sec):1153. #Correct:19524 #Trained:23385 Training Accuracy:83.4%
###Markdown
That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):1623. #Correct:846 #Tested:1000 Testing Accuracy:84.6%
###Markdown
End of Project 5 solution. Watch the next video to continue with Andrew's next lesson. Further Noise Reduction
###Code
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
###Output
_____no_output_____
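###Markdown
Before moving on, it can help to get a feel for how many words a frequency cutoff would keep. Here is a minimal sketch; the cutoff value `20` and the name `trial_cutoff` are just illustrative choices, not prescribed settings:
###Code
# Sketch: count how many vocabulary words occur more than a trial cutoff.
trial_cutoff = 20
print(sum(1 for word, cnt in total_counts.items() if cnt > trial_cutoff))
###Output
_____no_output_____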
###Markdown
Project 6: Reducing Noise by Strategically Reducing the Vocabulary**TODO:** Improve `SentimentNetwork`'s performance by reducing more noise in the vocabulary. Specifically, do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Modify `pre_process_data`:>* Add two additional parameters: `min_count` and `polarity_cutoff`>* Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)>* Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. >* Change so words are only added to the vocabulary if they occur in the reviews more than `min_count` times.>* Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least `polarity_cutoff`* Modify `__init__`:>* Add the same two parameters (`min_count` and `polarity_cutoff`) and use them when you call `pre_process_data` The following code is the same as the previous project, with project-specific changes marked with `"New for Project 6"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
## New for Project 6: added min_count and polarity_cutoff parameters
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count(int) - Words should only be added to the vocabulary
if they occur more than this many times
polarity_cutoff(float) - The absolute value of a word's positive-to-negative
ratio must be at least this big to be considered.
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
## New for Project 6: added min_count and polarity_cutoff parameters
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
## ----------------------------------------
## New for Project 6: Calculate positive-to-negative ratios for words before
# building vocabulary
#
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
#
## end New for Project 6
## ----------------------------------------
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
## New for Project 6: only add words that occur at least min_count times
# and for words with pos/neg ratios, only add words
# that meet the polarity_cutoff
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The input layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a small polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):1903. #Correct:859 #Tested:1000 Testing Accuracy:85.9%
###Markdown
Run the following cell to train your network with a much larger polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):6770. #Correct:2114 #Trained:2501 Training Accuracy:84.5%
Progress:20.8% Speed(reviews/sec):6416. #Correct:4235 #Trained:5001 Training Accuracy:84.6%
Progress:31.2% Speed(reviews/sec):6389. #Correct:6362 #Trained:7501 Training Accuracy:84.8%
Progress:41.6% Speed(reviews/sec):6406. #Correct:8513 #Trained:10001 Training Accuracy:85.1%
Progress:52.0% Speed(reviews/sec):6447. #Correct:10641 #Trained:12501 Training Accuracy:85.1%
Progress:62.5% Speed(reviews/sec):6367. #Correct:12796 #Trained:15001 Training Accuracy:85.3%
Progress:72.9% Speed(reviews/sec):6376. #Correct:14911 #Trained:17501 Training Accuracy:85.2%
Progress:83.3% Speed(reviews/sec):6405. #Correct:17077 #Trained:20001 Training Accuracy:85.3%
Progress:93.7% Speed(reviews/sec):6403. #Correct:19258 #Trained:22501 Training Accuracy:85.5%
Progress:99.9% Speed(reviews/sec):6424. #Correct:20552 #Trained:24000 Training Accuracy:85.6%
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):6031. #Correct:822 #Tested:1000 Testing Accuracy:82.2%
###Markdown
End of Project 6 solution. Watch the next video to continue with Andrew's next lesson. Analysis: What's Going on in the Weights?
###Code
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
###Output
/anaconda3/envs/sentiment/lib/python3.6/site-packages/bokeh/util/deprecation.py:34: BokehDeprecationWarning:
Supplying a user-defined data source AND iterable values to glyph methods is deprecated.
See https://github.com/bokeh/bokeh/issues/2056 for more information.
warn(message)
/anaconda3/envs/sentiment/lib/python3.6/site-packages/bokeh/util/deprecation.py:34: BokehDeprecationWarning:
Supplying a user-defined data source AND iterable values to glyph methods is deprecated.
See https://github.com/bokeh/bokeh/issues/2056 for more information.
warn(message)
###Markdown
Sentiment Classification & How To "Frame Problems" for a Neural Networkby Andrew Trask- **Twitter**: @iamtrask- **Blog**: http://iamtrask.github.io What You Should Already Know- neural networks, forward and back-propagation- stochastic gradient descent- mean squared error- and train/test splits Where to Get Help if You Need it- Re-watch previous Udacity Lectures- Leverage the recommended Course Reading Material - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) (Check inside your classroom for a discount code)- Shoot me a tweet @iamtrask Tutorial Outline:- Intro: The Importance of "Framing a Problem" (this lesson)- [Curate a Dataset](lesson_1)- [Developing a "Predictive Theory"](lesson_2)- [**PROJECT 1**: Quick Theory Validation](project_1)- [Transforming Text to Numbers](lesson_3)- [**PROJECT 2**: Creating the Input/Output Data](project_2)- Putting it all together in a Neural Network (video only - nothing in notebook)- [**PROJECT 3**: Building our Neural Network](project_3)- [Understanding Neural Noise](lesson_4)- [**PROJECT 4**: Making Learning Faster by Reducing Noise](project_4)- [Analyzing Inefficiencies in our Network](lesson_5)- [**PROJECT 5**: Making our Network Train and Run Faster](project_5)- [Further Noise Reduction](lesson_6)- [**PROJECT 6**: Reducing Noise by Strategically Reducing the Vocabulary](project_6)- [Analysis: What's going on in the weights?](lesson_7) Lesson: Curate a Dataset
###Code
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
###Output
_____no_output_____
###Markdown
**Note:** The data in `reviews.txt` we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way.
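If you ever do need to normalize case yourself, a minimal sketch might look like the following (the name `normalized_reviews` is just an illustrative placeholder and is not used later in this notebook):
###Code
# Sketch only: our reviews are already lowercase, so this is a no-op here,
# but raw text could be normalized the same way before any counting is done.
normalized_reviews = [review.lower() for review in reviews]
normalized_reviews[0][:80]
###Output
_____no_output_____
###Markdown
Now take a quick look at the data itself: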
###Code
len(reviews)
reviews[0]
labels[0]
###Output
_____no_output_____
###Markdown
Lesson: Develop a Predictive Theory
###Code
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
###Output
_____no_output_____
###Markdown
Project 1: Quick Theory ValidationThere are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.You'll find the [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) class to be useful in this exercise, as well as the [numpy](https://docs.scipy.org/doc/numpy/reference/) library.
###Code
from collections import Counter
import numpy as np
###Output
_____no_output_____
###Markdown
We'll create three `Counter` objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
###Code
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
###Output
_____no_output_____
###Markdown
**TODO:** Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.**Note:** Throughout these projects, you should use `split(' ')` to divide a piece of text (such as a review) into individual words. If you use `split()` instead, you'll get slightly different results than what the videos and solutions show.
###Code
# Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
###Output
_____no_output_____
###Markdown
Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
###Code
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
###Output
_____no_output_____
###Markdown
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the **ratios** of word usage between positive and negative reviews.**TODO:** Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in `pos_neg_ratios`. >Hint: the positive-to-negative ratio for a given word can be calculated with `positive_counts[word] / float(negative_counts[word]+1)`. Notice the `+1` in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
###Code
pos_neg_ratios = Counter()
# Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
###Output
_____no_output_____
###Markdown
Examine the ratios you've calculated for a few words:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
_____no_output_____
###Markdown
Looking closely at the values you just calculated, we see the following: * Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.* Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.* Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The `+1` we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:* Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute value from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.* When comparing absolute values it's easier to do that around zero than one. To fix these issues, we'll convert all of our ratios to new values using logarithms.**TODO:** Go through all the ratios you calculated and convert them to logarithms. (i.e. use `np.log(ratio)`)In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
###Code
# Convert ratios to logs
for word,ratio in pos_neg_ratios.most_common():
pos_neg_ratios[word] = np.log(ratio)
###Output
_____no_output_____
###Markdown
**NOTE:** In the video, Andrew uses the following formulas for the previous cell:> * For any positive words, convert the ratio using `np.log(ratio)`> * For any negative words, convert the ratio using `-np.log(1/(ratio + 0.01))`These won't give you the exact same results as the simpler code we show in this notebook, but the values will be similar. In case that second equation looks strange, here's what it's doing: First, it divides one by a very small number, which will produce a larger positive number. Then, it takes the `log` of that, which produces numbers similar to the ones for the positive words. Finally, it negates the values by adding that minus sign up front. The results are extremely positive and extremely negative words having positive-to-negative ratios with similar magnitudes but opposite signs, just like when we use `np.log(ratio)`. (A small numeric comparison of the two conversions follows the next cell.) Examine the new ratios you've calculated for the same words from before:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
_____no_output_____
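###Markdown
As a quick illustration of the note above, here is a small sketch comparing the simple `np.log(ratio)` conversion with Andrew's version for negative words. The ratio values below are made-up examples, not numbers taken from the dataset:
###Code
# Sketch: compare the two log conversions on a few illustrative ratio values.
import numpy as np

for ratio in [4.0, 1.0, 0.25]:
    simple = np.log(ratio)
    andrews = np.log(ratio) if ratio > 1 else -np.log(1 / (ratio + 0.01))
    print(ratio, simple, andrews)
###Output
_____no_output_____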
###Markdown
If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above `1`, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below `-1`. It's now clear that both of these words are associated with specific, opposing sentiments.Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see *all* the words in the list.)The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write `reversed(pos_neg_ratios.most_common())`.)You should continue to see values similar to the earlier ones we checked – neutral words will be close to `0`, words will get more positive as their ratios approach and go above `1`, and words will get more negative as their ratios approach and go below `-1`. That's why we decided to use the logs instead of the raw ratios.
###Code
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
###Output
_____no_output_____
###Markdown
End of Project 1. Watch the next video to continue with Andrew's next lesson. Transforming Text into Numbers
###Code
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
###Output
_____no_output_____
###Markdown
Project 2: Creating the Input/Output Data**TODO:** Create a [set](https://docs.python.org/3/tutorial/datastructures.html#sets) named `vocab` that contains every word in the vocabulary.
###Code
vocab = set(total_counts.keys())
###Output
_____no_output_____
###Markdown
Run the following cell to check your vocabulary size. If everything worked correctly, it should print **74074**
###Code
vocab_size = len(vocab)
print(vocab_size)
###Output
_____no_output_____
###Markdown
Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. `layer_0` is the input layer, `layer_1` is a hidden layer, and `layer_2` is the output layer.
###Code
from IPython.display import Image
Image(filename='sentiment_network_2.png')
###Output
_____no_output_____
###Markdown
**TODO:** Create a numpy array called `layer_0` and initialize it to all zeros. You will find the [zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) function particularly helpful here. Be sure you create `layer_0` as a 2-dimensional matrix with 1 row and `vocab_size` columns.
###Code
layer_0 = np.zeros((1,vocab_size))
###Output
_____no_output_____
###Markdown
Run the following cell. It should display `(1, 74074)`
###Code
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
###Output
_____no_output_____
###Markdown
`layer_0` contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
###Code
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `update_input_layer`. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside `layer_0`.
###Code
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
# count how many times each word is used in the given review and store the results in layer_0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
###Output
_____no_output_____
###Markdown
Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in `layer_0`.
###Code
update_input_layer(reviews[0])
layer_0
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `get_target_for_label`. It should return `0` or `1`, depending on whether the given label is `NEGATIVE` or `POSITIVE`, respectively.
###Code
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
if(label == 'POSITIVE'):
return 1
else:
return 0
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out `'POSITIVE'` and `1`, respectively.
###Code
labels[0]
get_target_for_label(labels[0])
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out `'NEGATIVE'` and `0`, respectively.
###Code
labels[1]
get_target_for_label(labels[1])
###Output
_____no_output_____
###Markdown
End of Project 2 solution. Watch the next video to continue with Andrew's next lesson. Project 3: Building a Neural Network **TODO:** We've included the framework of a class called `SentimentNetwork`. Implement all of the items marked `TODO` in the code. These include doing the following:- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. - Do **not** add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.- Re-use the code from earlier in this notebook to create the training data (see `TODO`s in the code)- Implement the `pre_process_data` function to create the vocabulary for our training data generating functions- Ensure `train` trains over the entire corpus Where to Get Help if You Need it- Re-watch previous week's Udacity Lectures- Chapters 3-5 - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) - (Check inside your classroom for a discount code)
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to create a `SentimentNetwork` that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
###Output
_____no_output_____
###Markdown
Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). **We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.**
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
###Code
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
That most likely didn't train very well. Part of the reason may be that the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, `0.01`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, `0.001`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
With a learning rate of `0.001`, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson. End of Project 3. Watch the next video to continue with Andrew's next lesson. Understanding Neural Noise
###Code
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
###Output
_____no_output_____
###Markdown
Project 4: Reducing Noise in Our Input Data**TODO:** Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:* Copy the `SentimentNetwork` class you created earlier into the following cell.* Modify `update_input_layer` so it does not count how many times each word is used, but rather just stores whether or not a word was used. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 4"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
## New for Project 4: changed to set to 1 instead of add 1
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 4 solution. Watch the next video to continue with Andrew's next lesson. Analyzing Inefficiencies in our Network
###Code
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
###Output
_____no_output_____
###Markdown
Project 5: Making our Network More Efficient**TODO:** Make the `SentimentNetwork` class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Remove the `update_input_layer` function - you will not need it in this version.* Modify `init_network`:>* You no longer need a separate input layer, so remove any mention of `self.layer_0`>* You will be dealing with the old hidden layer more directly, so create `self.layer_1`, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero* Modify `train`:>* Change the name of the input parameter `training_reviews` to `training_reviews_raw`. This will help with the next step.>* At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from `word2index`) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local `list` variable named `training_reviews` that should contain a `list` for each review in `training_reviews_raw`. Those lists should contain the indices for words found in the review.>* Remove call to `update_input_layer`>* Use `self`'s `layer_1` instead of a local `layer_1` object.>* In the forward pass, replace the code that updates `layer_1` with new logic that only adds the weights for the indices used in the review.>* When updating `weights_0_1`, only update the individual weights that were used in the forward pass.* Modify `run`:>* Remove call to `update_input_layer` >* Use `self`'s `layer_1` instead of a local `layer_1` object.>* Much like you did in `train`, you will need to pre-process the `review` so you can work with word indices, then update `layer_1` by adding weights for the indices used in the review. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 5"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it once again.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 5 solution. Watch the next video to continue with Andrew's next lesson. Further Noise Reduction
###Code
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
###Output
_____no_output_____
###Markdown
Project 6: Reducing Noise by Strategically Reducing the Vocabulary**TODO:** Improve `SentimentNetwork`'s performance by reducing more noise in the vocabulary. Specifically, do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Modify `pre_process_data`:>* Add two additional parameters: `min_count` and `polarity_cutoff`>* Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)>* Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. >* Change so words are only added to the vocabulary if they occur more than `min_count` times.>* Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least `polarity_cutoff`* Modify `__init__`:>* Add the same two parameters (`min_count` and `polarity_cutoff`) and use them when you call `pre_process_data` The following code is the same as the previous project, with project-specific changes marked with `"New for Project 6"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
## New for Project 6: added min_count and polarity_cutoff parameters
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count(int) - Words should only be added to the vocabulary
if they occur more than this many times
polarity_cutoff(float) - The absolute value of a word's positive-to-negative
ratio must be at least this big to be considered.
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
## New for Project 6: added min_count and polarity_cutoff parameters
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
## ----------------------------------------
## New for Project 6: Calculate positive-to-negative ratios for words before
# building vocabulary
#
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
#
## end New for Project 6
## ----------------------------------------
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
## New for Project 6: only add words that occur at least min_count times
# and for words with pos/neg ratios, only add words
# that meet the polarity_cutoff
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a small polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a much larger polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 6 solution. Watch the next video to continue with Andrew's next lesson. Analysis: What's Going on in the Weights?
###Code
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
###Output
_____no_output_____
###Markdown
Sentiment Classification & How To "Frame Problems" for a Neural Networkby Andrew Trask- **Twitter**: @iamtrask- **Blog**: http://iamtrask.github.io What You Should Already Know- neural networks, forward and back-propagation- stochastic gradient descent- mean squared error- and train/test splits Where to Get Help if You Need it- Re-watch previous Udacity Lectures- Leverage the recommended Course Reading Material - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) (Check inside your classroom for a discount code)- Shoot me a tweet @iamtrask Tutorial Outline:- Intro: The Importance of "Framing a Problem" (this lesson)- [Curate a Dataset](lesson_1)- [Developing a "Predictive Theory"](lesson_2)- [**PROJECT 1**: Quick Theory Validation](project_1)- [Transforming Text to Numbers](lesson_3)- [**PROJECT 2**: Creating the Input/Output Data](project_2)- Putting it all together in a Neural Network (video only - nothing in notebook)- [**PROJECT 3**: Building our Neural Network](project_3)- [Understanding Neural Noise](lesson_4)- [**PROJECT 4**: Making Learning Faster by Reducing Noise](project_4)- [Analyzing Inefficiencies in our Network](lesson_5)- [**PROJECT 5**: Making our Network Train and Run Faster](project_5)- [Further Noise Reduction](lesson_6)- [**PROJECT 6**: Reducing Noise by Strategically Reducing the Vocabulary](project_6)- [Analysis: What's going on in the weights?](lesson_7) Lesson: Curate a Dataset
###Code
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
###Output
_____no_output_____
###Markdown
**Note:** The data in `reviews.txt` we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way.
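For example, if you were starting from raw text, a minimal normalization pass like the sketch below would do it. This is purely illustrative and is not needed here, since `reviews.txt` is already lower case; the `raw_example` string is made up for the demonstration.
###Code
# Purely illustrative -- reviews.txt is already lower-cased, so this step
# isn't needed in this notebook.
raw_example = "The movie was GREAT, and The acting was great too."
print(raw_example.lower())

# Applied to a whole list of raw reviews it would look like:
# reviews = [review.lower() for review in reviews]
###Output
_____no_output_____
###Markdown
Now take a quick look at the size of the dataset and the first review and label: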
###Code
len(reviews)
reviews[0]
labels[0]
###Output
_____no_output_____
###Markdown
Lesson: Develop a Predictive Theory
###Code
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
###Output
labels.txt : reviews.txt
NEGATIVE : this movie is terrible but it has some good effects . ...
POSITIVE : adrian pasdar is excellent is this film . he makes a fascinating woman . ...
NEGATIVE : comment this movie is impossible . is terrible very improbable bad interpretat...
POSITIVE : excellent episode movie ala pulp fiction . days suicides . it doesnt get more...
NEGATIVE : if you haven t seen this it s terrible . it is pure trash . i saw this about ...
POSITIVE : this schiffer guy is a real genius the movie is of excellent quality and both e...
###Markdown
Project 1: Quick Theory ValidationThere are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook. You'll find the [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) class to be useful in this exercise, as well as the [numpy](https://docs.scipy.org/doc/numpy/reference/) library.
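If you haven't used `Counter` before, here is a tiny, self-contained illustration of the two behaviors these projects lean on: keys you haven't seen yet default to `0` (so you can increment them without initializing anything), and `most_common()` returns `(item, count)` pairs sorted from most to least frequent. The sentence used below is just made-up example text.
###Code
# A quick, standalone demo of collections.Counter behavior.
from collections import Counter

demo = Counter()
for word in "the movie was terrible , the acting was terrible too".split(" "):
    demo[word] += 1              # unseen keys start at 0, so this never raises

print(demo["terrible"])          # 2
print(demo["amazing"])           # 0 -- looking up a missing key just returns 0
print(demo.most_common(3))       # the three most frequent tokens with their counts
###Output
_____no_output_____
###Markdown
Now import the tools you'll use for this project: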
###Code
from collections import Counter
import numpy as np
###Output
_____no_output_____
###Markdown
We'll create three `Counter` objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
###Code
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
###Output
_____no_output_____
###Markdown
**TODO:** Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.**Note:** Throughout these projects, you should use `split(' ')` to divide a piece of text (such as a review) into individual words. If you use `split()` instead, you'll get slightly different results than what the videos and solutions show.
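To see why that matters, here is a quick, purely illustrative comparison (the `sample` string is made up):
###Code
# split(" ") keeps empty strings wherever there are consecutive (or trailing)
# spaces, while split() collapses all whitespace -- so the two can give
# different word counts on this data.
sample = "this movie  was great "
print(sample.split(" "))   # ['this', 'movie', '', 'was', 'great', '']
print(sample.split())      # ['this', 'movie', 'was', 'great']
###Output
_____no_output_____
###Markdown
Sticking with `split(" ")` keeps your counts in line with the videos and solutions. With that in mind, loop over the reviews and build up the counts: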
###Code
# Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
###Output
_____no_output_____
###Markdown
Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
###Code
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
###Output
_____no_output_____
###Markdown
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the **ratios** of word usage between positive and negative reviews.**TODO:** Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in `pos_neg_ratios`. >Hint: the positive-to-negative ratio for a given word can be calculated with `positive_counts[word] / float(negative_counts[word]+1)`. Notice the `+1` in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
###Code
pos_neg_ratios = Counter()
# Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
###Output
_____no_output_____
###Markdown
Examine the ratios you've calculated for a few words:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
_____no_output_____
###Markdown
Looking closely at the values you just calculated, we see the following: * Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.* Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.* Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The `+1` we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:* Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. We should therefore center all the values around neutral, so that a word's distance from neutral (the absolute value of its positive-to-negative ratio) indicates how much sentiment (positive or negative) that word conveys.* When comparing absolute values, it's easier to do that around zero than around one. To fix these issues, we'll convert all of our ratios to new values using logarithms.**TODO:** Go through all the ratios you calculated and convert them to logarithms. (i.e. use `np.log(ratio)`)In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
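Here's a quick numeric illustration of that symmetry (it reuses the `numpy` import from earlier in the notebook; the ratios 2.0 and 0.5 are made-up example values):
###Code
# A quick numeric check of the symmetry the logarithm gives us: a word used
# twice as often in positive reviews and a word used twice as often in
# negative reviews end up the same distance from zero, with opposite signs.
print(np.log(2.0))   # about  0.69
print(np.log(0.5))   # about -0.69
print(np.log(1.0))   # exactly 0.0 -- a perfectly neutral word
###Output
_____no_output_____
###Markdown
With that in mind, convert all of the ratios: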
###Code
# Convert ratios to logs
for word,ratio in pos_neg_ratios.most_common():
pos_neg_ratios[word] = np.log(ratio)
###Output
_____no_output_____
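###Markdown
To see why the log transform helps, here's a tiny standalone check with two made-up ratios: 4.0 for a strongly positive word and 0.25 for a strongly negative one (illustrative numbers only, not taken from the data):
###Code
# Illustrative only: logs turn reciprocal ratios into values of equal magnitude and opposite sign
print(np.log(4.0))    # ~1.39
print(np.log(0.25))   # ~-1.39
print(np.log(1.0))    # 0.0, so a perfectly neutral word lands at zero
###Output
_____no_output_____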
###Markdown
**NOTE:** In the video, Andrew uses the following formulas for the previous cell:> * For any positive words, convert the ratio using `np.log(ratio)`> * For any negative words, convert the ratio using `-np.log(1/(ratio + 0.01))`These won't give you the exact same results as the simpler code we show in this notebook, but the values will be similar. In case that second equation looks strange, here's what it's doing: First, it divides one by a very small number, which will produce a larger positive number. Then, it takes the `log` of that, which produces numbers similar to the ones for the positive words. Finally, it negates the values by adding that minus sign up front. The result is that extremely positive and extremely negative words have positive-to-negative ratios with similar magnitudes but opposite signs, just like when we use `np.log(ratio)`. Examine the new ratios you've calculated for the same words from before:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
_____no_output_____
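###Markdown
For reference, here's a minimal sketch of the alternative conversion Andrew describes in the video, applied to a tiny made-up Counter rather than the real data (the example ratio values are illustrative only):
###Code
# Sketch of the video's alternative formulas on made-up, un-logged ratios (illustrative values)
from collections import Counter
raw_ratios = Counter({'amazing': 4.0, 'the': 1.06, 'terrible': 0.18})
log_ratios = Counter()
for word, ratio in raw_ratios.most_common():
    if(ratio > 1):
        log_ratios[word] = np.log(ratio)
    else:
        log_ratios[word] = -np.log(1 / (ratio + 0.01))
log_ratios.most_common()
###Output
_____no_output_____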
###Markdown
If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above `1`, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below `-1`. It's now clear that both of these words are associated with specific, opposing sentiments.Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see *all* the words in the list.)The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write `reversed(pos_neg_ratios.most_common())`.)You should continue to see values similar to the earlier ones we checked – neutral words will be close to `0`, words will get more positive as their ratios approach and go above `1`, and words will get more negative as their ratios approach and go below `-1`. That's why we decided to use the logs instead of the raw ratios.
###Code
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
###Output
_____no_output_____
###Markdown
End of Project 1. Watch the next video to continue with Andrew's next lesson. Transforming Text into Numbers
###Code
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
###Output
_____no_output_____
###Markdown
Project 2: Creating the Input/Output Data**TODO:** Create a [set](https://docs.python.org/3/tutorial/datastructures.html#sets) named `vocab` that contains every word in the vocabulary.
###Code
vocab = set(total_counts.keys())
###Output
_____no_output_____
###Markdown
Run the following cell to check your vocabulary size. If everything worked correctly, it should print **74074**
###Code
vocab_size = len(vocab)
print(vocab_size)
###Output
_____no_output_____
###Markdown
Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. `layer_0` is the input layer, `layer_1` is a hidden layer, and `layer_2` is the output layer.
###Code
from IPython.display import Image
Image(filename='sentiment_network_2.png')
###Output
_____no_output_____
###Markdown
**TODO:** Create a numpy array called `layer_0` and initialize it to all zeros. You will find the [zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) function particularly helpful here. Be sure you create `layer_0` as a 2-dimensional matrix with 1 row and `vocab_size` columns.
###Code
layer_0 = np.zeros((1,vocab_size))
###Output
_____no_output_____
###Markdown
Run the following cell. It should display `(1, 74074)`
###Code
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
###Output
_____no_output_____
###Markdown
`layer_0` contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
###Code
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `update_input_layer`. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside `layer_0`.
###Code
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
# count how many times each word is used in the given review and store the results in layer_0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
###Output
_____no_output_____
###Markdown
Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in `layer_0`.
###Code
update_input_layer(reviews[0])
layer_0
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `get_target_for_label`. It should return `0` or `1`, depending on whether the given label is `NEGATIVE` or `POSITIVE`, respectively.
###Code
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
if(label == 'POSITIVE'):
return 1
else:
return 0
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out `'POSITIVE'` and `1`, respectively.
###Code
labels[0]
get_target_for_label(labels[0])
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out `'NEGATIVE'` and `0`, respectively.
###Code
labels[1]
get_target_for_label(labels[1])
###Output
_____no_output_____
###Markdown
End of Project 2 solution. Watch the next video to continue with Andrew's next lesson. Project 3: Building a Neural Network **TODO:** We've included the framework of a class called `SentimentNetwork`. Implement all of the items marked `TODO` in the code. These include doing the following:- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. - Do **not** add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.- Re-use the code from earlier in this notebook to create the training data (see `TODO`s in the code)- Implement the `pre_process_data` function to create the vocabulary for our training data generating functions- Ensure `train` trains over the entire corpus Where to Get Help if You Need it- Re-watch previous week's Udacity Lectures- Chapters 3-5 - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) - (Check inside your classroom for a discount code)
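Before you fill in the class, here's a minimal, standalone sketch of the forward pass those TODOs describe: a plain matrix product into the hidden layer (no activation function) and a sigmoid only on the output. The layer sizes here are arbitrary placeholders, not the ones the real network will use:
###Code
# Standalone shape check with illustrative sizes only; note there is no non-linearity on the hidden layer
example_input = np.zeros((1, 8))                 # stand-in for layer_0 (1 x vocab_size)
example_input[0][2] = 1                          # pretend one word was seen
w_0_1 = np.random.normal(0.0, 0.1, (8, 3))       # input-to-hidden weights
w_1_2 = np.random.normal(0.0, 0.1, (3, 1))       # hidden-to-output weights
hidden = example_input.dot(w_0_1)                # hidden layer: just a dot product, no activation
output = 1 / (1 + np.exp(-hidden.dot(w_1_2)))    # sigmoid applied only at the output
print(hidden.shape, output.shape)                # (1, 3) (1, 1)
###Output
_____no_output_____
###Markdown
With that structure in mind, complete the `SentimentNetwork` class below.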
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to create a `SentimentNetwork` that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
###Output
_____no_output_____
###Markdown
Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). **We have not trained the model yet, so the results should be about 50%, since it will just be guessing and there are only two possible values to choose from.**
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
###Code
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
That most likely didn't train very well. Part of the reason may be that the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, `0.01`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, `0.001`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
With a learning rate of `0.001`, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson. End of Project 3. Watch the next video to continue with Andrew's next lesson. Understanding Neural Noise
###Code
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
###Output
_____no_output_____
###Markdown
Project 4: Reducing Noise in Our Input Data**TODO:** Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:* Copy the `SentimentNetwork` class you created earlier into the following cell.* Modify `update_input_layer` so it does not count how many times each word is used, but rather just stores whether or not a word was used. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 4"`
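The only functional change is a single line inside `update_input_layer`; here's the difference in isolation, using a tiny made-up layer and a word index that repeats three times (illustrative values only):
###Code
# Illustrative only: counting occurrences (Project 3) vs. recording presence (Project 4)
demo_counts = np.zeros((1, 5))
demo_binary = np.zeros((1, 5))
for idx in [2, 2, 2]:             # pretend the same word (index 2) appears three times in a review
    demo_counts[0][idx] += 1      # Project 3 behavior: store how many times the word occurred
    demo_binary[0][idx] = 1       # Project 4 behavior: just record that the word occurred
print(demo_counts)                # [[0. 0. 3. 0. 0.]]
print(demo_binary)                # [[0. 0. 1. 0. 0.]]
###Output
_____no_output_____
###Markdown
The full class with that change applied follows.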
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
## New for Project 4: changed to set to 1 instead of add 1
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 4 solution. Watch the next video to continue with Andrew's next lesson. Analyzing Inefficiencies in our Network
###Code
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
###Output
_____no_output_____
###Markdown
Project 5: Making our Network More Efficient**TODO:** Make the `SentimentNetwork` class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Remove the `update_input_layer` function - you will not need it in this version.* Modify `init_network`:>* You no longer need a separate input layer, so remove any mention of `self.layer_0`>* You will be dealing with the old hidden layer more directly, so create `self.layer_1`, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero* Modify `train`:>* Change the name of the input parameter `training_reviews` to `training_reviews_raw`. This will help with the next step.>* At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from `word2index`) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local `list` variable named `training_reviews` that should contain a `list` for each review in `training_reviews_raw`. Those lists should contain the indices for words found in the review.>* Remove call to `update_input_layer`>* Use `self`'s `layer_1` instead of a local `layer_1` object.>* In the forward pass, replace the code that updates `layer_1` with new logic that only adds the weights for the indices used in the review.>* When updating `weights_0_1`, only update the individual weights that were used in the forward pass.* Modify `run`:>* Remove call to `update_input_layer` >* Use `self`'s `layer_1` instead of a local `layer_1` object.>* Much like you did in `train`, you will need to pre-process the `review` so you can work with word indices, then update `layer_1` by adding weights for the indices used in the review. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 5"`
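The same trick applies on the backward pass: instead of multiplying the full (mostly zero) input vector by the hidden-layer delta, you only touch the rows of `weights_0_1` for the word indices that actually appeared in the review. A small standalone sketch of just that update (the sizes, indices, and delta values are illustrative only):
###Code
# Illustrative only: update just the weight rows used in the forward pass
demo_weights_0_1 = np.random.randn(10, 5)                      # pretend input-to-hidden weights
demo_layer_1_delta = np.array([[0.2, -0.1, 0.05, 0.0, 0.3]])   # pretend hidden-layer delta
demo_learning_rate = 0.1
used_indices = [4, 9]                                          # word indices present in the review
for index in used_indices:
    demo_weights_0_1[index] -= demo_layer_1_delta[0] * demo_learning_rate
###Output
_____no_output_____
###Markdown
Now make those changes throughout the class below.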
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it once again.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 5 solution. Watch the next video to continue with Andrew's next lesson. Further Noise Reduction
###Code
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
###Output
_____no_output_____
###Markdown
Project 6: Reducing Noise by Strategically Reducing the Vocabulary**TODO:** Improve `SentimentNetwork`'s performance by reducing more noise in the vocabulary. Specifically, do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Modify `pre_process_data`:>* Add two additional parameters: `min_count` and `polarity_cutoff`>* Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)>* Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. >* Change so words are only added to the vocabulary if they occur in the vocabulary more than `min_count` times.>* Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least `polarity_cutoff`* Modify `__init__`:>* Add the same two parameters (`min_count` and `polarity_cutoff`) and use them when you call `pre_process_data` The following code is the same as the previous project, with project-specific changes marked with `"New for Project 6"`
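The heart of the change is the vocabulary filter in `pre_process_data`. Here it is in isolation as a small helper applied to made-up counts and (already log-converted) ratios; the function name, cutoff values, and sample words are all illustrative, not part of Andrew's solution:
###Code
# Illustrative only: the vocabulary-filtering rule for a single word
from collections import Counter
def keep_word(word, counts, ratios, min_count=20, polarity_cutoff=0.1):
    if(counts[word] <= min_count):
        return False                                  # too rare to include
    if(word in ratios.keys()):
        return abs(ratios[word]) >= polarity_cutoff   # must be polarized enough
    return True                                       # common word with no ratio computed
demo_counts = Counter({'film': 300, 'superb': 60, 'okay': 90, 'zzyzx': 3})
demo_ratios = Counter({'film': 0.02, 'superb': 1.2, 'okay': -0.03})
for w in ['film', 'superb', 'okay', 'zzyzx']:
    print(w, keep_word(w, demo_counts, demo_ratios))   # only 'superb' should survive
###Output
_____no_output_____
###Markdown
The full class with those changes follows.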
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
## New for Project 6: added min_count and polarity_cutoff parameters
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count(int) - Words should only be added to the vocabulary
if they occur more than this many times
polarity_cutoff(float) - The absolute value of a word's positive-to-negative
ratio must be at least this big to be considered.
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
## New for Project 6: added min_count and polarity_cutoff parameters
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
## ----------------------------------------
## New for Project 6: Calculate positive-to-negative ratios for words before
# building vocabulary
#
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
#
## end New for Project 6
## ----------------------------------------
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
## New for Project 6: only add words that occur at least min_count times
# and for words with pos/neg ratios, only add words
# that meet the polarity_cutoff
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a small polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a much larger polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 6 solution. Watch the next video to continue with Andrew's next lesson. Analysis: What's Going on in the Weights?
###Code
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
###Output
_____no_output_____
###Markdown
Sentiment Classification & How To "Frame Problems" for a Neural Networkby Andrew Trask- **Twitter**: @iamtrask- **Blog**: http://iamtrask.github.io What You Should Already Know- neural networks, forward and back-propagation- stochastic gradient descent- mean squared error- and train/test splits Where to Get Help if You Need it- Re-watch previous Udacity Lectures- Leverage the recommended Course Reading Material - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) (Check inside your classroom for a discount code)- Shoot me a tweet @iamtrask Tutorial Outline:- Intro: The Importance of "Framing a Problem" (this lesson)- [Curate a Dataset](lesson_1)- [Developing a "Predictive Theory"](lesson_2)- [**PROJECT 1**: Quick Theory Validation](project_1)- [Transforming Text to Numbers](lesson_3)- [**PROJECT 2**: Creating the Input/Output Data](project_2)- Putting it all together in a Neural Network (video only - nothing in notebook)- [**PROJECT 3**: Building our Neural Network](project_3)- [Understanding Neural Noise](lesson_4)- [**PROJECT 4**: Making Learning Faster by Reducing Noise](project_4)- [Analyzing Inefficiencies in our Network](lesson_5)- [**PROJECT 5**: Making our Network Train and Run Faster](project_5)- [Further Noise Reduction](lesson_6)- [**PROJECT 6**: Reducing Noise by Strategically Reducing the Vocabulary](project_6)- [Analysis: What's going on in the weights?](lesson_7) Lesson: Curate a Dataset
###Code
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
###Output
_____no_output_____
###Markdown
**Note:** The data in `reviews.txt` we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way.
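If you did need that normalization step, a minimal sketch might look like the following (the `raw_reviews` list here is hypothetical; it is not needed for the provided files):
###Code
# Hypothetical extra preprocessing step for raw text; reviews.txt is already lower-cased
raw_reviews = ["This movie was GREAT", "Terrible. Just terrible."]   # illustrative strings only
normalized_reviews = [review.lower() for review in raw_reviews]
print(normalized_reviews)
###Output
_____no_output_____
###Markdown
On to exploring the data we actually loaded: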
###Code
len(reviews)
reviews[0]
labels[0]
###Output
_____no_output_____
###Markdown
Lesson: Develop a Predictive Theory
###Code
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
###Output
labels.txt : reviews.txt
NEGATIVE : this movie is terrible but it has some good effects . ...
POSITIVE : adrian pasdar is excellent is this film . he makes a fascinating woman . ...
NEGATIVE : comment this movie is impossible . is terrible very improbable bad interpretat...
POSITIVE : excellent episode movie ala pulp fiction . days suicides . it doesnt get more...
NEGATIVE : if you haven t seen this it s terrible . it is pure trash . i saw this about ...
POSITIVE : this schiffer guy is a real genius the movie is of excellent quality and both e...
###Markdown
Project 1: Quick Theory ValidationThere are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.You'll find the [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) class to be useful in this exercise, as well as the [numpy](https://docs.scipy.org/doc/numpy/reference/) library.
###Code
from collections import Counter
import numpy as np
###Output
_____no_output_____
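###Markdown
As a quick refresher, here is a small, hypothetical illustration (not part of the project code) of how the `Counter` imported above tallies items and reports them with `most_common()`:
###Code
# Hypothetical mini-example: Counter tallies how often each item appears
sample_words = "the movie was the best movie ever".split(" ")
sample_counts = Counter(sample_words)
# most_common() returns (word, count) pairs sorted from most to least frequent
print(sample_counts.most_common())
###Output
_____no_output_____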
###Markdown
We'll create three `Counter` objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
###Code
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
###Output
_____no_output_____
###Markdown
**TODO:** Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.**Note:** Throughout these projects, you should use `split(' ')` to divide a piece of text (such as a review) into individual words. If you use `split()` instead, you'll get slightly different results than what the videos and solutions show.
###Code
# Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
###Output
_____no_output_____
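###Markdown
As a quick aside, here is a small, hypothetical illustration (not part of the project code) of the difference between `split(' ')` and `split()` mentioned in the note above: the space-delimited version keeps the empty strings produced by repeated spaces, which is why the two approaches give slightly different counts.
###Code
# Hypothetical illustration of split(' ') vs split(); note the double space in the string
example_text = "this  movie was great"
print(example_text.split(' '))   # ['this', '', 'movie', 'was', 'great'] - keeps the empty string
print(example_text.split())      # ['this', 'movie', 'was', 'great']     - collapses whitespace
###Output
_____no_output_____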
###Markdown
Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
###Code
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
###Output
_____no_output_____
###Markdown
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the **ratios** of word usage between positive and negative reviews.**TODO:** Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in `pos_neg_ratios`. >Hint: the positive-to-negative ratio for a given word can be calculated with `positive_counts[word] / float(negative_counts[word]+1)`. Notice the `+1` in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
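For instance (a hypothetical worked example, not taken from the dataset), if a word appeared 1,000 times in positive reviews and 99 times in negative reviews, its ratio would be 1000 / (99 + 1) = 10.0, well above the neutral value of 1; with the counts reversed, the ratio would be 99 / (1000 + 1) ≈ 0.099, well below 1.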
###Code
pos_neg_ratios = Counter()
# Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
###Output
_____no_output_____
###Markdown
Examine the ratios you've calculated for a few words:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
Pos-to-neg ratio for 'the' = 1.0607993145235326
Pos-to-neg ratio for 'amazing' = 4.022813688212928
Pos-to-neg ratio for 'terrible' = 0.17744252873563218
###Markdown
Looking closely at the values you just calculated, we see the following: * Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.* Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.* Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The `+1` we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:* Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute distance from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.* When comparing absolute values it's easier to do that around zero than one. To fix these issues, we'll convert all of our ratios to new values using logarithms.**TODO:** Go through all the ratios you calculated and convert them to logarithms. (i.e. use `np.log(ratio)`)In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
###Code
# Convert ratios to logs
for word,ratio in pos_neg_ratios.most_common():
pos_neg_ratios[word] = np.log(ratio)
###Output
_____no_output_____
###Markdown
**NOTE:** In the video, Andrew uses the following formulas for the previous cell:> * For any positive words, convert the ratio using `np.log(ratio)`> * For any negative words, convert the ratio using `-np.log(1/(ratio + 0.01))`These won't give you the exact same results as the simpler code we show in this notebook, but the values will be similar. In case that second equation looks strange, here's what it's doing: First, it divides one by a very small number, which will produce a larger positive number. Then, it takes the `log` of that, which produces numbers similar to the ones for the positive words. Finally, it negates the values by adding that minus sign up front. The results are extremely positive and extremely negative words having positive-to-negative ratios with similar magnitudes but opposite signs, just like when we use `np.log(ratio)`. Examine the new ratios you've calculated for the same words from before:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
Pos-to-neg ratio for 'the' = 0.05902269426102881
Pos-to-neg ratio for 'amazing' = 1.3919815802404802
Pos-to-neg ratio for 'terrible' = -1.7291085042663878
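###Markdown
For comparison, here is a minimal sketch (an illustration only, not Andrew's solution cell) of the alternative conversion from the note above, applied to two hypothetical raw ratios; both approaches produce values with similar magnitudes and opposite signs:
###Code
# Hypothetical raw ratios, roughly matching the examples discussed earlier
raw_ratio_positive_word = 4.0    # e.g. a strongly positive word like "amazing"
raw_ratio_negative_word = 0.18   # e.g. a strongly negative word like "terrible"
# Positive words: np.log(ratio)
print(np.log(raw_ratio_positive_word))                # roughly 1.39
# Negative words: -np.log(1/(ratio + 0.01))
print(-np.log(1 / (raw_ratio_negative_word + 0.01)))  # roughly -1.66
###Output
_____no_output_____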
###Markdown
If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above `1`, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below `-1`. It's now clear that both of these words are associated with specific, opposing sentiments.Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see *all* the words in the list.)The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write `reversed(pos_neg_ratios.most_common())`.)You should continue to see values similar to the earlier ones we checked – neutral words will be close to `0`, words will get more positive as their ratios approach and go above `1`, and words will get more negative as their ratios approach and go below `-1`. That's why we decided to use the logs instead of the raw ratios.
###Code
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
###Output
_____no_output_____
###Markdown
End of Project 1. Watch the next video to continue with Andrew's next lesson. Transforming Text into Numbers
###Code
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
###Output
_____no_output_____
###Markdown
Project 2: Creating the Input/Output Data**TODO:** Create a [set](https://docs.python.org/3/tutorial/datastructures.html#sets) named `vocab` that contains every word in the vocabulary.
###Code
vocab = set(total_counts.keys())
###Output
_____no_output_____
###Markdown
Run the following cell to check your vocabulary size. If everything worked correctly, it should print **74074**
###Code
vocab_size = len(vocab)
print(vocab_size)
###Output
74074
###Markdown
Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. `layer_0` is the input layer, `layer_1` is a hidden layer, and `layer_2` is the output layer.
###Code
from IPython.display import Image
Image(filename='sentiment_network_2.png')
###Output
_____no_output_____
###Markdown
**TODO:** Create a numpy array called `layer_0` and initialize it to all zeros. You will find the [zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) function particularly helpful here. Be sure you create `layer_0` as a 2-dimensional matrix with 1 row and `vocab_size` columns.
###Code
layer_0 = np.zeros((1,vocab_size))
###Output
_____no_output_____
###Markdown
Run the following cell. It should display `(1, 74074)`
###Code
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
###Output
_____no_output_____
###Markdown
`layer_0` contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
###Code
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `update_input_layer`. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside `layer_0`.
###Code
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
# count how many times each word is used in the given review and store the results in layer_0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
###Output
_____no_output_____
###Markdown
Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in `layer_0`.
###Code
update_input_layer(reviews[0])
layer_0
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `get_target_for_label`. It should return `0` or `1`, depending on whether the given label is `NEGATIVE` or `POSITIVE`, respectively.
###Code
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
if(label == 'POSITIVE'):
return 1
else:
return 0
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out `'POSITIVE'` and `1`, respectively.
###Code
labels[0]
get_target_for_label(labels[0])
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out `'NEGATIVE'` and `0`, respectively.
###Code
labels[1]
get_target_for_label(labels[1])
###Output
_____no_output_____
###Markdown
End of Project 2 solution. Watch the next video to continue with Andrew's next lesson. Project 3: Building a Neural Network **TODO:** We've included the framework of a class called `SentimentNetwork`. Implement all of the items marked `TODO` in the code. These include doing the following:- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. - Do **not** add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.- Re-use the code from earlier in this notebook to create the training data (see `TODO`s in the code)- Implement the `pre_process_data` function to create the vocabulary for our training data generating functions- Ensure `train` trains over the entire corpus Where to Get Help if You Need it- Re-watch previous week's Udacity Lectures- Chapters 3-5 - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) - (Check inside your classroom for a discount code)
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to create a `SentimentNetwork` that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
###Output
_____no_output_____
###Markdown
Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). **We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.**
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):1065. #Correct:500 #Tested:1000 Testing Accuracy:50.0%
###Markdown
Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
###Code
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):227.4 #Correct:1251 #Trained:2501 Training Accuracy:50.0%
Progress:20.8% Speed(reviews/sec):223.8 #Correct:2501 #Trained:5001 Training Accuracy:50.0%
Progress:31.2% Speed(reviews/sec):215.9 #Correct:3751 #Trained:7501 Training Accuracy:50.0%
Progress:41.6% Speed(reviews/sec):219.7 #Correct:5001 #Trained:10001 Training Accuracy:50.0%
Progress:52.0% Speed(reviews/sec):218.9 #Correct:6251 #Trained:12501 Training Accuracy:50.0%
Progress:62.5% Speed(reviews/sec):218.2 #Correct:7501 #Trained:15001 Training Accuracy:50.0%
Progress:72.9% Speed(reviews/sec):218.8 #Correct:8751 #Trained:17501 Training Accuracy:50.0%
Progress:83.3% Speed(reviews/sec):219.6 #Correct:10001 #Trained:20001 Training Accuracy:50.0%
Progress:93.7% Speed(reviews/sec):219.2 #Correct:11251 #Trained:22501 Training Accuracy:50.0%
Progress:99.9% Speed(reviews/sec):219.0 #Correct:12000 #Trained:24000 Training Accuracy:50.0%
###Markdown
That most likely didn't train very well. Part of the reason may be that the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, `0.01`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):168.5 #Correct:1248 #Trained:2501 Training Accuracy:49.9%
Progress:20.8% Speed(reviews/sec):165.7 #Correct:2498 #Trained:5001 Training Accuracy:49.9%
Progress:31.2% Speed(reviews/sec):164.7 #Correct:3748 #Trained:7501 Training Accuracy:49.9%
Progress:41.6% Speed(reviews/sec):163.9 #Correct:4998 #Trained:10001 Training Accuracy:49.9%
Progress:52.0% Speed(reviews/sec):163.4 #Correct:6248 #Trained:12501 Training Accuracy:49.9%
Progress:62.5% Speed(reviews/sec):162.9 #Correct:7491 #Trained:15001 Training Accuracy:49.9%
Progress:72.9% Speed(reviews/sec):162.1 #Correct:8741 #Trained:17501 Training Accuracy:49.9%
Progress:83.3% Speed(reviews/sec):161.8 #Correct:9991 #Trained:20001 Training Accuracy:49.9%
Progress:93.7% Speed(reviews/sec):161.4 #Correct:11241 #Trained:22501 Training Accuracy:49.9%
Progress:99.9% Speed(reviews/sec):161.3 #Correct:11990 #Trained:24000 Training Accuracy:49.9%
###Markdown
That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, `0.001`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):170.6 #Correct:1256 #Trained:2501 Training Accuracy:50.2%
Progress:20.8% Speed(reviews/sec):165.1 #Correct:2639 #Trained:5001 Training Accuracy:52.7%
Progress:31.2% Speed(reviews/sec):164.5 #Correct:4110 #Trained:7501 Training Accuracy:54.7%
Progress:41.6% Speed(reviews/sec):162.3 #Correct:5674 #Trained:10001 Training Accuracy:56.7%
Progress:52.0% Speed(reviews/sec):161.7 #Correct:7251 #Trained:12501 Training Accuracy:58.0%
Progress:62.5% Speed(reviews/sec):160.9 #Correct:8872 #Trained:15001 Training Accuracy:59.1%
Progress:72.9% Speed(reviews/sec):160.3 #Correct:10509 #Trained:17501 Training Accuracy:60.0%
Progress:83.3% Speed(reviews/sec):160.2 #Correct:12218 #Trained:20001 Training Accuracy:61.0%
Progress:93.7% Speed(reviews/sec):160.1 #Correct:13868 #Trained:22501 Training Accuracy:61.6%
Progress:99.9% Speed(reviews/sec):160.2 #Correct:14942 #Trained:24000 Training Accuracy:62.2%
###Markdown
With a learning rate of `0.001`, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson. End of Project 3. Watch the next video to continue with Andrew's next lesson. Understanding Neural Noise
###Code
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
###Output
_____no_output_____
###Markdown
Project 4: Reducing Noise in Our Input Data**TODO:** Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:* Copy the `SentimentNetwork` class you created earlier into the following cell.* Modify `update_input_layer` so it does not count how many times each word is used, but rather just stores whether or not a word was used. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 4"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
## New for Project 4: changed to set to 1 instead of add 1
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):1280. #Correct:857 #Tested:1000 Testing Accuracy:85.7%
###Markdown
End of Project 4 solution. Watch the next video to continue with Andrew's next lesson. Analyzing Inefficiencies in our Network
###Code
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
###Output
_____no_output_____
###Markdown
Project 5: Making our Network More Efficient**TODO:** Make the `SentimentNetwork` class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Remove the `update_input_layer` function - you will not need it in this version.* Modify `init_network`:>* You no longer need a separate input layer, so remove any mention of `self.layer_0`>* You will be dealing with the old hidden layer more directly, so create `self.layer_1`, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero* Modify `train`:>* Change the name of the input parameter `training_reviews` to `training_reviews_raw`. This will help with the next step.>* At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from `word2index`) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local `list` variable named `training_reviews` that should contain a `list` for each review in `training_reviews_raw`. Those lists should contain the indices for words found in the review.>* Remove call to `update_input_layer`>* Use `self`'s `layer_1` instead of a local `layer_1` object.>* In the forward pass, replace the code that updates `layer_1` with new logic that only adds the weights for the indices used in the review.>* When updating `weights_0_1`, only update the individual weights that were used in the forward pass.* Modify `run`:>* Remove call to `update_input_layer` >* Use `self`'s `layer_1` instead of a local `layer_1` object.>* Much like you did in `train`, you will need to pre-process the `review` so you can work with word indices, then update `layer_1` by adding weights for the indices used in the review. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 5"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it once again.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):1166. #Correct:1694 #Trained:2501 Training Accuracy:67.7%
Progress:20.8% Speed(reviews/sec):1143. #Correct:3678 #Trained:5001 Training Accuracy:73.5%
Progress:31.2% Speed(reviews/sec):1142. #Correct:5752 #Trained:7501 Training Accuracy:76.6%
Progress:41.6% Speed(reviews/sec):1150. #Correct:7880 #Trained:10001 Training Accuracy:78.7%
Progress:52.0% Speed(reviews/sec):1145. #Correct:10014 #Trained:12501 Training Accuracy:80.1%
Progress:62.5% Speed(reviews/sec):1146. #Correct:12143 #Trained:15001 Training Accuracy:80.9%
Progress:72.9% Speed(reviews/sec):1145. #Correct:14269 #Trained:17501 Training Accuracy:81.5%
Progress:83.3% Speed(reviews/sec):1139. #Correct:16449 #Trained:20001 Training Accuracy:82.2%
Progress:93.7% Speed(reviews/sec):1132. #Correct:18629 #Trained:22501 Training Accuracy:82.7%
Progress:99.9% Speed(reviews/sec):1130. #Correct:19945 #Trained:24000 Training Accuracy:83.1%
###Markdown
That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):1623. #Correct:846 #Tested:1000 Testing Accuracy:84.6%
###Markdown
End of Project 5 solution. Watch the next video to continue with Andrew's next lesson. Further Noise Reduction
###Code
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
# density=True already normalizes the histogram; the deprecated 'normed' argument is not needed
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
# density=True already normalizes the histogram; the deprecated 'normed' argument is not needed
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
###Output
_____no_output_____
###Markdown
Project 6: Reducing Noise by Strategically Reducing the Vocabulary**TODO:** Improve `SentimentNetwork`'s performance by reducing more noise in the vocabulary. Specifically, do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Modify `pre_process_data`:>* Add two additional parameters: `min_count` and `polarity_cutoff`>* Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)>* Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. >* Change so words are only added to the vocabulary if they occur in the vocabulary more than `min_count` times.>* Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least `polarity_cutoff`* Modify `__init__`:>* Add the same two parameters (`min_count` and `polarity_cutoff`) and use them when you call `pre_process_data` The following code is the same as the previous project, with project-specific changes marked with `"New for Project 6"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
## New for Project 6: added min_count and polarity_cutoff parameters
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count(int) - Words should only be added to the vocabulary
if they occur more than this many times
polarity_cutoff(float) - The absolute value of a word's positive-to-negative
ratio must be at least this big to be considered.
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
## New for Project 6: added min_count and polarity_cutoff parameters
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
## ----------------------------------------
## New for Project 6: Calculate positive-to-negative ratios for words before
# building vocabulary
#
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
#
## end New for Project 6
## ----------------------------------------
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
## New for Project 6: only add words that occur at least min_count times
# and for words with pos/neg ratios, only add words
# that meet the polarity_cutoff
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for output-layer values greater than or equal to 0.5;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
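###Markdown
Before training, here is a small, self-contained sketch (toy counts and made-up ratio values, not the real review data) of the vocabulary filter that `pre_process_data` applies above: a word survives only if it occurs more than `min_count` times and, when it is common enough to have a pos/neg ratio, that ratio is at least `polarity_cutoff` away from neutral in either direction.
###Code
# Toy illustration of the Project 6 vocabulary filter -- not the real data.
import numpy as np
toy_total_counts = {"the": 500, "amazing": 120, "terrible": 110, "obscureword": 3}
toy_pos_neg_ratios = {"the": np.log(1.06), "amazing": np.log(4.0),
                      "terrible": -np.log(1 / (0.18 + 0.01))}
toy_min_count, toy_polarity_cutoff = 20, 0.5
kept = set()
for word, count in toy_total_counts.items():
    if count > toy_min_count:
        # words with a ratio must clear the cutoff; frequent-enough words
        # without a ratio are kept as-is, just like in pre_process_data
        if word in toy_pos_neg_ratios:
            if abs(toy_pos_neg_ratios[word]) >= toy_polarity_cutoff:
                kept.add(word)
        else:
            kept.add(word)
print(kept)   # expected: {'amazing', 'terrible'}
###Output
_____no_output_____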
###Markdown
Run the following cell to train your network with a small polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):1307. #Correct:1994 #Trained:2501 Training Accuracy:79.7%
Progress:20.8% Speed(reviews/sec):1281. #Correct:4063 #Trained:5001 Training Accuracy:81.2%
Progress:31.2% Speed(reviews/sec):1284. #Correct:6176 #Trained:7501 Training Accuracy:82.3%
Progress:41.6% Speed(reviews/sec):1288. #Correct:8336 #Trained:10001 Training Accuracy:83.3%
Progress:52.0% Speed(reviews/sec):1286. #Correct:10501 #Trained:12501 Training Accuracy:84.0%
Progress:62.5% Speed(reviews/sec):1287. #Correct:12641 #Trained:15001 Training Accuracy:84.2%
Progress:72.9% Speed(reviews/sec):1283. #Correct:14782 #Trained:17501 Training Accuracy:84.4%
Progress:83.3% Speed(reviews/sec):1279. #Correct:16954 #Trained:20001 Training Accuracy:84.7%
Progress:93.7% Speed(reviews/sec):1276. #Correct:19143 #Trained:22501 Training Accuracy:85.0%
Progress:99.9% Speed(reviews/sec):1275. #Correct:20461 #Trained:24000 Training Accuracy:85.2%
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:99.9% Speed(reviews/sec):1903. #Correct:859 #Tested:1000 Testing Accuracy:85.9%
###Markdown
Run the following cell to train your network with a much larger polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):6770. #Correct:2114 #Trained:2501 Training Accuracy:84.5%
Progress:20.8% Speed(reviews/sec):6416. #Correct:4235 #Trained:5001 Training Accuracy:84.6%
Progress:31.2% Speed(reviews/sec):6389. #Correct:6362 #Trained:7501 Training Accuracy:84.8%
Progress:41.6% Speed(reviews/sec):6406. #Correct:8513 #Trained:10001 Training Accuracy:85.1%
Progress:52.0% Speed(reviews/sec):6447. #Correct:10641 #Trained:12501 Training Accuracy:85.1%
Progress:62.5% Speed(reviews/sec):6367. #Correct:12796 #Trained:15001 Training Accuracy:85.3%
Progress:72.9% Speed(reviews/sec):6376. #Correct:14911 #Trained:17501 Training Accuracy:85.2%
Progress:83.3% Speed(reviews/sec):6405. #Correct:17077 #Trained:20001 Training Accuracy:85.3%
Progress:93.7% Speed(reviews/sec):6403. #Correct:19258 #Trained:22501 Training Accuracy:85.5%
Progress:99.9% Speed(reviews/sec):6424. #Correct:20552 #Trained:24000 Training Accuracy:85.6%
###Markdown
Sentiment Classification & How To "Frame Problems" for a Neural Networkby Andrew Trask- **Twitter**: @iamtrask- **Blog**: http://iamtrask.github.io What You Should Already Know- neural networks, forward and back-propagation- stochastic gradient descent- mean squared error- and train/test splits Where to Get Help if You Need it- Re-watch previous Udacity Lectures- Leverage the recommended Course Reading Material - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) (Check inside your classroom for a discount code)- Shoot me a tweet @iamtrask Tutorial Outline:- Intro: The Importance of "Framing a Problem" (this lesson)- [Curate a Dataset](lesson_1)- [Developing a "Predictive Theory"](lesson_2)- [**PROJECT 1**: Quick Theory Validation](project_1)- [Transforming Text to Numbers](lesson_3)- [**PROJECT 2**: Creating the Input/Output Data](project_2)- Putting it all together in a Neural Network (video only - nothing in notebook)- [**PROJECT 3**: Building our Neural Network](project_3)- [Understanding Neural Noise](lesson_4)- [**PROJECT 4**: Making Learning Faster by Reducing Noise](project_4)- [Analyzing Inefficiencies in our Network](lesson_5)- [**PROJECT 5**: Making our Network Train and Run Faster](project_5)- [Further Noise Reduction](lesson_6)- [**PROJECT 6**: Reducing Noise by Strategically Reducing the Vocabulary](project_6)- [Analysis: What's going on in the weights?](lesson_7) Lesson: Curate a Dataset
###Code
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
###Output
_____no_output_____
###Markdown
**Note:** The data in `reviews.txt` we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way.
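A minimal sketch of that conversion step is shown below; it is purely illustrative, since the `reviews` list loaded above is already lower case and the cell is effectively a no-op here.
###Code
# Illustrative only: lowercase every review before building word counts.
# The reviews.txt data loaded above is already lower case, so this changes
# nothing here; it is the step you would add for raw, mixed-case text.
reviews = [review.lower() for review in reviews]
###Output
_____no_output_____
###Markdown
Take a quick look at the data that was loaded: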
###Code
len(reviews)
reviews[0]
labels[0]
###Output
_____no_output_____
###Markdown
Lesson: Develop a Predictive Theory
###Code
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
###Output
labels.txt : reviews.txt
NEGATIVE : this movie is terrible but it has some good effects . ...
POSITIVE : adrian pasdar is excellent is this film . he makes a fascinating woman . ...
NEGATIVE : comment this movie is impossible . is terrible very improbable bad interpretat...
POSITIVE : excellent episode movie ala pulp fiction . days suicides . it doesnt get more...
NEGATIVE : if you haven t seen this it s terrible . it is pure trash . i saw this about ...
POSITIVE : this schiffer guy is a real genius the movie is of excellent quality and both e...
###Markdown
Project 1: Quick Theory ValidationThere are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.You'll find the [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) class to be useful in this exercise, as well as the [numpy](https://docs.scipy.org/doc/numpy/reference/) library.
###Code
from collections import Counter
import numpy as np
###Output
_____no_output_____
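###Markdown
If you haven't used `Counter` before, it behaves like a dictionary whose missing keys default to zero, which makes tallying words a one-liner. A quick illustration on a toy sentence (not the review data):
###Code
# Quick Counter demo on a toy sentence -- not part of the project solution.
toy_counts = Counter("the cat sat on the mat the end".split(" "))
print(toy_counts.most_common(2))   # [('the', 3), ('cat', 1)] -- 'the' leads
###Output
_____no_output_____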
###Markdown
We'll create three `Counter` objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
###Code
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
###Output
_____no_output_____
###Markdown
**TODO:** Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.**Note:** Throughout these projects, you should use `split(' ')` to divide a piece of text (such as a review) into individual words. If you use `split()` instead, you'll get slightly different results than what the videos and solutions show.
###Code
# Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
###Output
_____no_output_____
###Markdown
Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
###Code
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
###Output
_____no_output_____
###Markdown
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the **ratios** of word usage between positive and negative reviews.**TODO:** Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in `pos_neg_ratios`. >Hint: the positive-to-negative ratio for a given word can be calculated with `positive_counts[word] / float(negative_counts[word]+1)`. Notice the `+1` in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
###Code
pos_neg_ratios = Counter()
# Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
###Output
_____no_output_____
###Markdown
Examine the ratios you've calculated for a few words:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
Pos-to-neg ratio for 'the' = 1.0607993145235326
Pos-to-neg ratio for 'amazing' = 4.022813688212928
Pos-to-neg ratio for 'terrible' = 0.17744252873563218
###Markdown
Looking closely at the values you just calculated, we see the following: * Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.* Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.* Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The `+1` we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:* Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral, so the absolute distance from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.* When comparing absolute values it's easier to do that around zero than around one. To fix these issues, we'll convert all of our ratios to new values using logarithms.**TODO:** Go through all the ratios you calculated and convert them to logarithms. (i.e. use `np.log(ratio)`)In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
###Code
# Convert ratios to logs
for word,ratio in pos_neg_ratios.most_common():
pos_neg_ratios[word] = np.log(ratio)
###Output
_____no_output_____
###Markdown
**NOTE:** In the video, Andrew uses the following formulas for the previous cell:> * For any positive words, convert the ratio using `np.log(ratio)`> * For any negative words, convert the ratio using `-np.log(1/(ratio + 0.01))`These won't give you the exact same results as the simpler code we show in this notebook, but the values will be similar. In case that second equation looks strange, here's what it's doing: First, it divides one by a very small number, which will produce a larger positive number. Then, it takes the `log` of that, which produces numbers similar to the ones for the positive words. Finally, it negates the values by adding that minus sign up front. The result is that extremely positive and extremely negative words end up with positive-to-negative ratios of similar magnitudes but opposite signs, just like when we use `np.log(ratio)`. Examine the new ratios you've calculated for the same words from before:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
Pos-to-neg ratio for 'the' = 0.05902269426102881
Pos-to-neg ratio for 'amazing' = 1.3919815802404802
Pos-to-neg ratio for 'terrible' = -1.7291085042663878
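###Markdown
For comparison, here is a tiny, self-contained sketch of the two-branch conversion from Andrew's video described in the note above. It uses toy ratio values rather than the live `pos_neg_ratios` object (which has already been converted with the simpler `np.log(ratio)`), so it only illustrates the shape of the calculation.
###Code
# Toy illustration of the two-branch log conversion -- not the project data.
import numpy as np
toy_ratios = {"amazing": 4.02, "the": 1.06, "terrible": 0.18}
toy_converted = {}
for word, ratio in toy_ratios.items():
    if ratio > 1:
        toy_converted[word] = np.log(ratio)
    else:
        toy_converted[word] = -np.log(1 / (ratio + 0.01))
# very positive and very negative words end up with similar magnitudes
# and opposite signs; neutral words stay near zero
print(toy_converted)
###Output
_____no_output_____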
###Markdown
If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above `1`, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below `-1`. It's now clear that both of these words are associated with specific, opposing sentiments.Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see *all* the words in the list.)The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write `reversed(pos_neg_ratios.most_common())`.)You should continue to see values similar to the earlier ones we checked – neutral words will be close to `0`, words will get more positive as their ratios approach and go above `1`, and words will get more negative as their ratios approach and go below `-1`. That's why we decided to use the logs instead of the raw ratios.
###Code
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
###Output
_____no_output_____
###Markdown
End of Project 1. Watch the next video to continue with Andrew's next lesson. Transforming Text into Numbers
###Code
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
###Output
_____no_output_____
###Markdown
Project 2: Creating the Input/Output Data**TODO:** Create a [set](https://docs.python.org/3/tutorial/datastructures.html#sets) named `vocab` that contains every word in the vocabulary.
###Code
vocab = set(total_counts.keys())
###Output
_____no_output_____
###Markdown
Run the following cell to check your vocabulary size. If everything worked correctly, it should print **74074**
###Code
vocab_size = len(vocab)
print(vocab_size)
###Output
74074
###Markdown
Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. `layer_0` is the input layer, `layer_1` is a hidden layer, and `layer_2` is the output layer.
###Code
from IPython.display import Image
Image(filename='sentiment_network_2.png')
###Output
_____no_output_____
###Markdown
**TODO:** Create a numpy array called `layer_0` and initialize it to all zeros. You will find the [zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) function particularly helpful here. Be sure you create `layer_0` as a 2-dimensional matrix with 1 row and `vocab_size` columns.
###Code
layer_0 = np.zeros((1,vocab_size))
###Output
_____no_output_____
###Markdown
Run the following cell. It should display `(1, 74074)`
###Code
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
###Output
_____no_output_____
###Markdown
`layer_0` contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
###Code
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `update_input_layer`. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside `layer_0`.
###Code
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
# count how many times each word is used in the given review and store the results in layer_0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
###Output
_____no_output_____
###Markdown
Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in `layer_0`.
###Code
update_input_layer(reviews[0])
layer_0
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `get_target_for_label`. It should return `0` or `1`, depending on whether the given label is `NEGATIVE` or `POSITIVE`, respectively.
###Code
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
if(label == 'POSITIVE'):
return 1
else:
return 0
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out`'POSITIVE'` and `1`, respectively.
###Code
labels[0]
get_target_for_label(labels[0])
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out `'NEGATIVE'` and `0`, respectively.
###Code
labels[1]
get_target_for_label(labels[1])
###Output
_____no_output_____
###Markdown
End of Project 2 solution. Watch the next video to continue with Andrew's next lesson. Project 3: Building a Neural Network **TODO:** We've included the framework of a class called `SentimentNetwork`. Implement all of the items marked `TODO` in the code. These include doing the following:- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. - Do **not** add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.- Re-use the code from earlier in this notebook to create the training data (see `TODO`s in the code)- Implement the `pre_process_data` function to create the vocabulary for our training data generating functions- Ensure `train` trains over the entire corpus Where to Get Help if You Need it- Re-watch previous week's Udacity Lectures- Chapters 3-5 - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) - (Check inside your classroom for a discount code)
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for output-layer values greater than or equal to 0.5;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
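###Markdown
To make the shapes in `train` concrete, here is a minimal standalone sketch of one forward/backward step with toy sizes (3 input words, 2 hidden nodes, 1 output node) and made-up values. It mirrors the class above but is not part of Andrew's solution:
###Code
# Toy forward/backward pass with tiny dimensions, for illustration only.
import numpy as np
def toy_sigmoid(x):
    return 1 / (1 + np.exp(-x))
np.random.seed(1)
toy_learning_rate = 0.1
toy_target = 1                                        # pretend the label is POSITIVE
toy_layer_0 = np.array([[1.0, 0.0, 1.0]])             # 1 x 3 word counts
toy_weights_0_1 = np.zeros((3, 2))                    # 3 x 2, zeros like the class
toy_weights_1_2 = np.random.normal(0.0, 1.0, (2, 1))  # 2 x 1
# Forward pass: no activation on the hidden layer, sigmoid on the output
toy_layer_1 = toy_layer_0.dot(toy_weights_0_1)                # shape (1, 2)
toy_layer_2 = toy_sigmoid(toy_layer_1.dot(toy_weights_1_2))   # shape (1, 1), in (0, 1)
# Backward pass: output error -> output delta -> hidden error -> weight updates
toy_layer_2_error = toy_layer_2 - toy_target
toy_layer_2_delta = toy_layer_2_error * toy_layer_2 * (1 - toy_layer_2)
toy_layer_1_error = toy_layer_2_delta.dot(toy_weights_1_2.T)
toy_layer_1_delta = toy_layer_1_error                 # no hidden nonlinearity
toy_weights_1_2 -= toy_layer_1.T.dot(toy_layer_2_delta) * toy_learning_rate
toy_weights_0_1 -= toy_layer_0.T.dot(toy_layer_1_delta) * toy_learning_rate
print(toy_layer_1.shape, toy_layer_2.shape, toy_layer_2[0, 0])
###Output
_____no_output_____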
###Markdown
Run the following cell to create a `SentimentNetwork` that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
###Output
_____no_output_____
###Markdown
Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). **We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.**
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Tested:1 Testing Accuracy:100.%
Progress:0.1% Speed(reviews/sec):17.26 #Correct:1 #Tested:2 Testing Accuracy:50.0%
Progress:0.2% Speed(reviews/sec):33.94 #Correct:2 #Tested:3 Testing Accuracy:66.6%
Progress:0.3% Speed(reviews/sec):50.03 #Correct:2 #Tested:4 Testing Accuracy:50.0%
Progress:0.4% Speed(reviews/sec):65.63 #Correct:3 #Tested:5 Testing Accuracy:60.0%
Progress:0.5% Speed(reviews/sec):80.28 #Correct:3 #Tested:6 Testing Accuracy:50.0%
Progress:0.6% Speed(reviews/sec):94.24 #Correct:4 #Tested:7 Testing Accuracy:57.1%
Progress:0.7% Speed(reviews/sec):107.9 #Correct:4 #Tested:8 Testing Accuracy:50.0%
Progress:0.8% Speed(reviews/sec):121.2 #Correct:5 #Tested:9 Testing Accuracy:55.5%
Progress:0.9% Speed(reviews/sec):131.1 #Correct:5 #Tested:10 Testing Accuracy:50.0%
Progress:1.0% Speed(reviews/sec):142.8 #Correct:6 #Tested:11 Testing Accuracy:54.5%
Progress:1.1% Speed(reviews/sec):154.8 #Correct:6 #Tested:12 Testing Accuracy:50.0%
Progress:1.2% Speed(reviews/sec):165.8 #Correct:7 #Tested:13 Testing Accuracy:53.8%
Progress:1.3% Speed(reviews/sec):176.3 #Correct:7 #Tested:14 Testing Accuracy:50.0%
Progress:1.4% Speed(reviews/sec):186.2 #Correct:8 #Tested:15 Testing Accuracy:53.3%
Progress:1.5% Speed(reviews/sec):196.3 #Correct:8 #Tested:16 Testing Accuracy:50.0%
Progress:1.6% Speed(reviews/sec):204.4 #Correct:9 #Tested:17 Testing Accuracy:52.9%
Progress:1.7% Speed(reviews/sec):213.8 #Correct:9 #Tested:18 Testing Accuracy:50.0%
Progress:1.8% Speed(reviews/sec):223.9 #Correct:10 #Tested:19 Testing Accuracy:52.6%
Progress:1.9% Speed(reviews/sec):232.6 #Correct:10 #Tested:20 Testing Accuracy:50.0%
Progress:2.0% Speed(reviews/sec):240.8 #Correct:11 #Tested:21 Testing Accuracy:52.3%
Progress:2.1% Speed(reviews/sec):249.8 #Correct:11 #Tested:22 Testing Accuracy:50.0%
Progress:2.2% Speed(reviews/sec):258.3 #Correct:12 #Tested:23 Testing Accuracy:52.1%
Progress:2.3% Speed(reviews/sec):266.2 #Correct:12 #Tested:24 Testing Accuracy:50.0%
Progress:2.4% Speed(reviews/sec):267.1 #Correct:13 #Tested:25 Testing Accuracy:52.0%
Progress:2.5% Speed(reviews/sec):273.9 #Correct:13 #Tested:26 Testing Accuracy:50.0%
Progress:2.6% Speed(reviews/sec):281.1 #Correct:14 #Tested:27 Testing Accuracy:51.8%
Progress:2.7% Speed(reviews/sec):285.7 #Correct:14 #Tested:28 Testing Accuracy:50.0%
Progress:2.8% Speed(reviews/sec):293.0 #Correct:15 #Tested:29 Testing Accuracy:51.7%
Progress:2.9% Speed(reviews/sec):299.7 #Correct:15 #Tested:30 Testing Accuracy:50.0%
Progress:3.0% Speed(reviews/sec):305.0 #Correct:16 #Tested:31 Testing Accuracy:51.6%
Progress:3.1% Speed(reviews/sec):311.6 #Correct:16 #Tested:32 Testing Accuracy:50.0%
Progress:3.2% Speed(reviews/sec):318.2 #Correct:17 #Tested:33 Testing Accuracy:51.5%
Progress:3.3% Speed(reviews/sec):323.9 #Correct:17 #Tested:34 Testing Accuracy:50.0%
Progress:3.4% Speed(reviews/sec):329.5 #Correct:18 #Tested:35 Testing Accuracy:51.4%
Progress:3.5% Speed(reviews/sec):335.6 #Correct:18 #Tested:36 Testing Accuracy:50.0%
Progress:3.6% Speed(reviews/sec):339.0 #Correct:19 #Tested:37 Testing Accuracy:51.3%
Progress:3.7% Speed(reviews/sec):344.8 #Correct:19 #Tested:38 Testing Accuracy:50.0%
Progress:3.8% Speed(reviews/sec):349.8 #Correct:20 #Tested:39 Testing Accuracy:51.2%
Progress:3.9% Speed(reviews/sec):354.3 #Correct:20 #Tested:40 Testing Accuracy:50.0%
Progress:4.0% Speed(reviews/sec):359.6 #Correct:21 #Tested:41 Testing Accuracy:51.2%
Progress:4.1% Speed(reviews/sec):364.8 #Correct:21 #Tested:42 Testing Accuracy:50.0%
Progress:4.2% Speed(reviews/sec):365.4 #Correct:22 #Tested:43 Testing Accuracy:51.1%
Progress:4.3% Speed(reviews/sec):368.3 #Correct:22 #Tested:44 Testing Accuracy:50.0%
Progress:4.4% Speed(reviews/sec):368.9 #Correct:23 #Tested:45 Testing Accuracy:51.1%
Progress:4.5% Speed(reviews/sec):362.4 #Correct:23 #Tested:46 Testing Accuracy:50.0%
Progress:4.6% Speed(reviews/sec):361.5 #Correct:24 #Tested:47 Testing Accuracy:51.0%
Progress:4.7% Speed(reviews/sec):366.6 #Correct:24 #Tested:48 Testing Accuracy:50.0%
Progress:4.8% Speed(reviews/sec):370.5 #Correct:25 #Tested:49 Testing Accuracy:51.0%
Progress:4.9% Speed(reviews/sec):375.3 #Correct:25 #Tested:50 Testing Accuracy:50.0%
Progress:5.0% Speed(reviews/sec):380.1 #Correct:26 #Tested:51 Testing Accuracy:50.9%
Progress:5.1% Speed(reviews/sec):385.0 #Correct:26 #Tested:52 Testing Accuracy:50.0%
Progress:5.2% Speed(reviews/sec):388.2 #Correct:27 #Tested:53 Testing Accuracy:50.9%
Progress:5.3% Speed(reviews/sec):392.4 #Correct:27 #Tested:54 Testing Accuracy:50.0%
Progress:5.4% Speed(reviews/sec):396.9 #Correct:28 #Tested:55 Testing Accuracy:50.9%
Progress:5.5% Speed(reviews/sec):400.8 #Correct:28 #Tested:56 Testing Accuracy:50.0%
Progress:5.6% Speed(reviews/sec):401.6 #Correct:29 #Tested:57 Testing Accuracy:50.8%
Progress:5.7% Speed(reviews/sec):405.9 #Correct:29 #Tested:58 Testing Accuracy:50.0%
Progress:5.8% Speed(reviews/sec):408.8 #Correct:30 #Tested:59 Testing Accuracy:50.8%
Progress:5.9% Speed(reviews/sec):413.1 #Correct:30 #Tested:60 Testing Accuracy:50.0%
Progress:6.0% Speed(reviews/sec):415.8 #Correct:31 #Tested:61 Testing Accuracy:50.8%
Progress:6.1% Speed(reviews/sec):419.2 #Correct:31 #Tested:62 Testing Accuracy:50.0%
Progress:6.2% Speed(reviews/sec):423.5 #Correct:32 #Tested:63 Testing Accuracy:50.7%
Progress:6.3% Speed(reviews/sec):427.5 #Correct:32 #Tested:64 Testing Accuracy:50.0%
Progress:6.4% Speed(reviews/sec):429.4 #Correct:33 #Tested:65 Testing Accuracy:50.7%
Progress:6.5% Speed(reviews/sec):433.1 #Correct:33 #Tested:66 Testing Accuracy:50.0%
Progress:6.6% Speed(reviews/sec):436.9 #Correct:34 #Tested:67 Testing Accuracy:50.7%
Progress:6.7% Speed(reviews/sec):440.4 #Correct:34 #Tested:68 Testing Accuracy:50.0%
Progress:6.8% Speed(reviews/sec):443.8 #Correct:35 #Tested:69 Testing Accuracy:50.7%
Progress:6.9% Speed(reviews/sec):442.7 #Correct:35 #Tested:70 Testing Accuracy:50.0%
Progress:7.0% Speed(reviews/sec):441.0 #Correct:36 #Tested:71 Testing Accuracy:50.7%
Progress:7.1% Speed(reviews/sec):444.7 #Correct:36 #Tested:72 Testing Accuracy:50.0%
Progress:7.2% Speed(reviews/sec):448.1 #Correct:37 #Tested:73 Testing Accuracy:50.6%
Progress:7.3% Speed(reviews/sec):451.3 #Correct:37 #Tested:74 Testing Accuracy:50.0%
Progress:7.4% Speed(reviews/sec):454.3 #Correct:38 #Tested:75 Testing Accuracy:50.6%
Progress:7.5% Speed(reviews/sec):457.1 #Correct:38 #Tested:76 Testing Accuracy:50.0%
Progress:7.6% Speed(reviews/sec):460.2 #Correct:39 #Tested:77 Testing Accuracy:50.6%
Progress:7.7% Speed(reviews/sec):463.0 #Correct:39 #Tested:78 Testing Accuracy:50.0%
Progress:7.8% Speed(reviews/sec):465.7 #Correct:40 #Tested:79 Testing Accuracy:50.6%
Progress:7.9% Speed(reviews/sec):468.0 #Correct:40 #Tested:80 Testing Accuracy:50.0%
Progress:8.0% Speed(reviews/sec):470.8 #Correct:41 #Tested:81 Testing Accuracy:50.6%
Progress:8.1% Speed(reviews/sec):468.7 #Correct:41 #Tested:82 Testing Accuracy:50.0%
Progress:8.2% Speed(reviews/sec):469.9 #Correct:42 #Tested:83 Testing Accuracy:50.6%
Progress:8.3% Speed(reviews/sec):471.5 #Correct:42 #Tested:84 Testing Accuracy:50.0%
Progress:8.4% Speed(reviews/sec):474.5 #Correct:43 #Tested:85 Testing Accuracy:50.5%
Progress:8.5% Speed(reviews/sec):477.2 #Correct:43 #Tested:86 Testing Accuracy:50.0%
Progress:8.6% Speed(reviews/sec):479.2 #Correct:44 #Tested:87 Testing Accuracy:50.5%
Progress:8.7% Speed(reviews/sec):482.0 #Correct:44 #Tested:88 Testing Accuracy:50.0%
Progress:8.8% Speed(reviews/sec):485.0 #Correct:45 #Tested:89 Testing Accuracy:50.5%
Progress:8.9% Speed(reviews/sec):485.2 #Correct:45 #Tested:90 Testing Accuracy:50.0%
Progress:9.0% Speed(reviews/sec):487.5 #Correct:46 #Tested:91 Testing Accuracy:50.5%
Progress:9.1% Speed(reviews/sec):490.2 #Correct:46 #Tested:92 Testing Accuracy:50.0%
Progress:9.2% Speed(reviews/sec):492.6 #Correct:47 #Tested:93 Testing Accuracy:50.5%
Progress:9.3% Speed(reviews/sec):489.5 #Correct:47 #Tested:94 Testing Accuracy:50.0%
Progress:9.4% Speed(reviews/sec):489.7 #Correct:48 #Tested:95 Testing Accuracy:50.5%
Progress:9.5% Speed(reviews/sec):491.8 #Correct:48 #Tested:96 Testing Accuracy:50.0%
Progress:9.6% Speed(reviews/sec):493.1 #Correct:49 #Tested:97 Testing Accuracy:50.5%
Progress:9.7% Speed(reviews/sec):494.9 #Correct:49 #Tested:98 Testing Accuracy:50.0%
Progress:9.8% Speed(reviews/sec):497.4 #Correct:50 #Tested:99 Testing Accuracy:50.5%
Progress:9.9% Speed(reviews/sec):499.8 #Correct:50 #Tested:100 Testing Accuracy:50.0%
Progress:10.0% Speed(reviews/sec):502.3 #Correct:51 #Tested:101 Testing Accuracy:50.4%
Progress:10.1% Speed(reviews/sec):504.6 #Correct:51 #Tested:102 Testing Accuracy:50.0%
Progress:10.2% Speed(reviews/sec):506.9 #Correct:52 #Tested:103 Testing Accuracy:50.4%
Progress:10.3% Speed(reviews/sec):508.1 #Correct:52 #Tested:104 Testing Accuracy:50.0%
Progress:10.4% Speed(reviews/sec):510.5 #Correct:53 #Tested:105 Testing Accuracy:50.4%
Progress:10.5% Speed(reviews/sec):511.6 #Correct:53 #Tested:106 Testing Accuracy:50.0%
Progress:10.6% Speed(reviews/sec):512.1 #Correct:54 #Tested:107 Testing Accuracy:50.4%
Progress:10.7% Speed(reviews/sec):513.6 #Correct:54 #Tested:108 Testing Accuracy:50.0%
Progress:10.8% Speed(reviews/sec):515.1 #Correct:55 #Tested:109 Testing Accuracy:50.4%
Progress:10.9% Speed(reviews/sec):515.1 #Correct:55 #Tested:110 Testing Accuracy:50.0%
Progress:11.0% Speed(reviews/sec):515.9 #Correct:56 #Tested:111 Testing Accuracy:50.4%
Progress:11.1% Speed(reviews/sec):516.7 #Correct:56 #Tested:112 Testing Accuracy:50.0%
Progress:11.2% Speed(reviews/sec):516.0 #Correct:57 #Tested:113 Testing Accuracy:50.4%
Progress:11.3% Speed(reviews/sec):515.5 #Correct:57 #Tested:114 Testing Accuracy:50.0%
Progress:11.4% Speed(reviews/sec):516.6 #Correct:58 #Tested:115 Testing Accuracy:50.4%
Progress:11.5% Speed(reviews/sec):516.5 #Correct:58 #Tested:116 Testing Accuracy:50.0%
Progress:11.6% Speed(reviews/sec):512.2 #Correct:59 #Tested:117 Testing Accuracy:50.4%
Progress:11.7% Speed(reviews/sec):512.1 #Correct:59 #Tested:118 Testing Accuracy:50.0%
Progress:11.8% Speed(reviews/sec):513.4 #Correct:60 #Tested:119 Testing Accuracy:50.4%
Progress:11.9% Speed(reviews/sec):514.8 #Correct:60 #Tested:120 Testing Accuracy:50.0%
Progress:12.0% Speed(reviews/sec):516.4 #Correct:61 #Tested:121 Testing Accuracy:50.4%
Progress:12.1% Speed(reviews/sec):516.6 #Correct:61 #Tested:122 Testing Accuracy:50.0%
Progress:12.2% Speed(reviews/sec):518.2 #Correct:62 #Tested:123 Testing Accuracy:50.4%
Progress:12.3% Speed(reviews/sec):516.1 #Correct:62 #Tested:124 Testing Accuracy:50.0%
Progress:12.4% Speed(reviews/sec):514.2 #Correct:63 #Tested:125 Testing Accuracy:50.4%
Progress:12.5% Speed(reviews/sec):514.7 #Correct:63 #Tested:126 Testing Accuracy:50.0%
Progress:12.6% Speed(reviews/sec):516.8 #Correct:64 #Tested:127 Testing Accuracy:50.3%
Progress:12.7% Speed(reviews/sec):518.2 #Correct:64 #Tested:128 Testing Accuracy:50.0%
Progress:12.8% Speed(reviews/sec):520.1 #Correct:65 #Tested:129 Testing Accuracy:50.3%
Progress:12.9% Speed(reviews/sec):522.0 #Correct:65 #Tested:130 Testing Accuracy:50.0%
Progress:13.0% Speed(reviews/sec):523.9 #Correct:66 #Tested:131 Testing Accuracy:50.3%
Progress:13.1% Speed(reviews/sec):525.8 #Correct:66 #Tested:132 Testing Accuracy:50.0%
Progress:13.2% Speed(reviews/sec):527.8 #Correct:67 #Tested:133 Testing Accuracy:50.3%
Progress:13.3% Speed(reviews/sec):529.7 #Correct:67 #Tested:134 Testing Accuracy:50.0%
Progress:13.4% Speed(reviews/sec):531.1 #Correct:68 #Tested:135 Testing Accuracy:50.3%
Progress:13.5% Speed(reviews/sec):532.1 #Correct:68 #Tested:136 Testing Accuracy:50.0%
###Markdown
Run the following cell to actually train the network. During training, it will display the model's accuracy repeatedly as it trains so you can see how well it's doing.
###Code
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):138.6 #Correct:1251 #Trained:2501 Training Accuracy:50.0%
Progress:20.8% Speed(reviews/sec):142.7 #Correct:2501 #Trained:5001 Training Accuracy:50.0%
Progress:31.2% Speed(reviews/sec):140.6 #Correct:3751 #Trained:7501 Training Accuracy:50.0%
Progress:41.6% Speed(reviews/sec):140.9 #Correct:5001 #Trained:10001 Training Accuracy:50.0%
Progress:52.0% Speed(reviews/sec):140.3 #Correct:6251 #Trained:12501 Training Accuracy:50.0%
Progress:62.5% Speed(reviews/sec):138.6 #Correct:7501 #Trained:15001 Training Accuracy:50.0%
Progress:72.9% Speed(reviews/sec):133.7 #Correct:8751 #Trained:17501 Training Accuracy:50.0%
Progress:83.3% Speed(reviews/sec):134.9 #Correct:10001 #Trained:20001 Training Accuracy:50.0%
Progress:93.7% Speed(reviews/sec):135.5 #Correct:11251 #Trained:22501 Training Accuracy:50.0%
Progress:99.9% Speed(reviews/sec):135.7 #Correct:12000 #Trained:24000 Training Accuracy:50.0%
###Markdown
That most likely didn't train very well. Part of the reason may be that the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, `0.01`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:9.85% Speed(reviews/sec):130.7 #Correct:1181 #Trained:2367 Training Accuracy:49.8%
###Markdown
That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, `0.001`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
With a learning rate of `0.001`, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson. End of Project 3. Watch the next video to continue with Andrew's next lesson. Understanding Neural Noise
###Code
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
###Output
_____no_output_____
###Markdown
Project 4: Reducing Noise in Our Input Data**TODO:** Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:* Copy the `SentimentNetwork` class you created earlier into the following cell.* Modify `update_input_layer` so it does not count how many times each word is used, but rather just stores whether or not a word was used. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 4"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
## New for Project 4: changed to set to 1 instead of add 1
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for output-layer values greater than or equal to 0.5;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
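###Markdown
A quick, toy illustration of the single change this version makes (made-up review text, not the real data): a repeated word like "the" no longer contributes a large count to the input vector, only a 1.
###Code
# Toy comparison of count-based vs. binary input representations.
from collections import Counter
toy_review = "the movie was the worst . the absolute worst ."
toy_counts = Counter(toy_review.split(" "))
toy_binary = {word: 1 for word in toy_counts}
print(toy_counts.most_common(3))   # 'the' and '.' dominate the counts
print(toy_binary)                  # every word contributes at most 1
###Output
_____no_output_____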
###Markdown
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 4 solution. Watch the next video to continue with Andrew's next lesson. Analyzing Inefficiencies in our Network
###Code
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
###Output
_____no_output_____
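###Markdown
As a quick aside (not part of Andrew's original walkthrough), the cell below double-checks that the index-sum shortcut really does produce the same hidden-layer values as the full dot product. It reuses the small `weights_0_1` matrix defined above; `dense_layer_1` and `sparse_layer_1` are illustrative names introduced just for this check.
###Code
# Sanity check (illustrative aside): the full dot product and the index-sum
# shortcut give identical hidden-layer values when only a few inputs are non-zero.
layer_0 = np.zeros(10)
layer_0[4] = 1
layer_0[9] = 1
dense_layer_1 = layer_0.dot(weights_0_1)    # multiplies every row, mostly by zeros
sparse_layer_1 = np.zeros(5)
for index in [4, 9]:
    sparse_layer_1 += weights_0_1[index]    # touches only the two non-zero rows
print(np.allclose(dense_layer_1, sparse_layer_1))   # expected: True
###Output
_____no_output_____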
###Markdown
Project 5: Making our Network More Efficient**TODO:** Make the `SentimentNetwork` class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Remove the `update_input_layer` function - you will not need it in this version.* Modify `init_network`:>* You no longer need a separate input layer, so remove any mention of `self.layer_0`>* You will be dealing with the old hidden layer more directly, so create `self.layer_1`, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero* Modify `train`:>* Change the name of the input parameter `training_reviews` to `training_reviews_raw`. This will help with the next step.>* At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from `word2index`) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local `list` variable named `training_reviews` that should contain a `list` for each review in `training_reviews_raw`. Those lists should contain the indices for words found in the review.>* Remove call to `update_input_layer`>* Use `self`'s `layer_1` instead of a local `layer_1` object.>* In the forward pass, replace the code that updates `layer_1` with new logic that only adds the weights for the indices used in the review.>* When updating `weights_0_1`, only update the individual weights that were used in the forward pass.* Modify `run`:>* Remove call to `update_input_layer` >* Use `self`'s `layer_1` instead of a local `layer_1` object.>* Much like you did in `train`, you will need to pre-process the `review` so you can work with word indices, then update `layer_1` by adding weights for the indices used in the review. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 5"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the testing_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it once again.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
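###Markdown
As an optional spot check (an aside, not part of the original project), you can feed the trained network a couple of hand-written sentences; `run` simply skips any word that is not in `word2index`, so arbitrary text is safe to pass in. The example sentences below are made up for illustration.
###Code
# Illustrative spot check on the freshly trained Project 5 network.
# Words the network has never seen are ignored by run().
print(mlp.run("this movie was excellent , the acting was wonderful"))
print(mlp.run("this movie was terrible , boring and a waste of time"))
###Output
_____no_output_____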
###Markdown
End of Project 5 solution. Watch the next video to continue with Andrew's next lesson. Further Noise Reduction
###Code
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
###Output
_____no_output_____
###Markdown
Project 6: Reducing Noise by Strategically Reducing the Vocabulary**TODO:** Improve `SentimentNetwork`'s performance by reducing more noise in the vocabulary. Specifically, do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Modify `pre_process_data`:>* Add two additional parameters: `min_count` and `polarity_cutoff`>* Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)>* Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. >* Change so words are only added to the vocabulary if they occur more than `min_count` times.>* Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least `polarity_cutoff`* Modify `__init__`:>* Add the same two parameters (`min_count` and `polarity_cutoff`) and use them when you call `pre_process_data` The following code is the same as the previous project, with project-specific changes marked with `"New for Project 6"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
## New for Project 6: added min_count and polarity_cutoff parameters
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count(int) - Words should only be added to the vocabulary
if they occur more than this many times
polarity_cutoff(float) - The absolute value of a word's positive-to-negative
ratio must be at least this big to be considered.
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
## New for Project 6: added min_count and polarity_cutoff parameters
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
## ----------------------------------------
## New for Project 6: Calculate positive-to-negative ratios for words before
# building vocabulary
#
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
#
## end New for Project 6
## ----------------------------------------
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
## New for Project 6: only add words that occur at least min_count times
# and for words with pos/neg ratios, only add words
# that meet the polarity_cutoff
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
# The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the testing_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a small polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a much larger polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
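###Markdown
For some intuition about what these settings do (an illustrative aside, not part of the original project), the cell below rebuilds the network with the two `polarity_cutoff` values used above and prints how many words survive into the vocabulary. Constructing a `SentimentNetwork` only runs `pre_process_data` and `init_network`, so this is cheap; `net` is a throwaway name introduced here.
###Code
# Illustrative aside: compare how aggressively each polarity_cutoff prunes the vocabulary.
for cutoff in (0.05, 0.8):
    net = SentimentNetwork(reviews[:-1000], labels[:-1000],
                           min_count=20, polarity_cutoff=cutoff, learning_rate=0.01)
    print("polarity_cutoff =", cutoff, "-> vocabulary size:", net.review_vocab_size)
###Output
_____no_output_____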
###Markdown
End of Project 6 solution. Watch the next video to continue with Andrew's next lesson. Analysis: What's Going on in the Weights?
###Code
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
###Output
_____no_output_____
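###Markdown
The similarity scores above are raw dot products, so they are also influenced by how long each word's weight vector is. As a hedged variant (not part of Andrew's lesson), the cell below normalises the vectors first so the score becomes a cosine similarity; `get_most_similar_words_cosine` is a name introduced here just for illustration.
###Code
# Illustrative variant of get_most_similar_words: length-normalise the weight rows
# so the similarity score is a cosine similarity rather than a raw dot product.
def get_most_similar_words_cosine(focus="horrible"):
    focus_vec = mlp_full.weights_0_1[mlp_full.word2index[focus]]
    focus_vec = focus_vec / (np.linalg.norm(focus_vec) + 1e-10)
    most_similar = Counter()
    for word, idx in mlp_full.word2index.items():
        vec = mlp_full.weights_0_1[idx]
        most_similar[word] = np.dot(vec / (np.linalg.norm(vec) + 1e-10), focus_vec)
    return most_similar.most_common(10)
get_most_similar_words_cosine("excellent")
###Output
_____no_output_____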
###Markdown
Sentiment Classification & How To "Frame Problems" for a Neural Networkby Andrew Trask- **Twitter**: @iamtrask- **Blog**: http://iamtrask.github.io What You Should Already Know- neural networks, forward and back-propagation- stochastic gradient descent- mean squared error- and train/test splits Where to Get Help if You Need it- Re-watch previous Udacity Lectures- Leverage the recommended Course Reading Material - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) (Check inside your classroom for a discount code)- Shoot me a tweet @iamtrask Tutorial Outline:- Intro: The Importance of "Framing a Problem" (this lesson)- [Curate a Dataset](lesson_1)- [Developing a "Predictive Theory"](lesson_2)- [**PROJECT 1**: Quick Theory Validation](project_1)- [Transforming Text to Numbers](lesson_3)- [**PROJECT 2**: Creating the Input/Output Data](project_2)- Putting it all together in a Neural Network (video only - nothing in notebook)- [**PROJECT 3**: Building our Neural Network](project_3)- [Understanding Neural Noise](lesson_4)- [**PROJECT 4**: Making Learning Faster by Reducing Noise](project_4)- [Analyzing Inefficiencies in our Network](lesson_5)- [**PROJECT 5**: Making our Network Train and Run Faster](project_5)- [Further Noise Reduction](lesson_6)- [**PROJECT 6**: Reducing Noise by Strategically Reducing the Vocabulary](project_6)- [Analysis: What's going on in the weights?](lesson_7) Lesson: Curate a Dataset
###Code
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
###Output
_____no_output_____
###Markdown
**Note:** The data in `reviews.txt` we're using has already been preprocessed a bit and contains only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way.
###Code
len(reviews)
reviews[0]
labels[0]
###Output
_____no_output_____
###Markdown
Lesson: Develop a Predictive Theory
###Code
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
###Output
labels.txt : reviews.txt
NEGATIVE : this movie is terrible but it has some good effects . ...
POSITIVE : adrian pasdar is excellent is this film . he makes a fascinating woman . ...
NEGATIVE : comment this movie is impossible . is terrible very improbable bad interpretat...
POSITIVE : excellent episode movie ala pulp fiction . days suicides . it doesnt get more...
NEGATIVE : if you haven t seen this it s terrible . it is pure trash . i saw this about ...
POSITIVE : this schiffer guy is a real genius the movie is of excellent quality and both e...
###Markdown
Project 1: Quick Theory ValidationThere are multiple ways to implement these projects, but in order to get your code closer to what Andrew shows in his solutions, we've provided some hints and starter code throughout this notebook.You'll find the [Counter](https://docs.python.org/2/library/collections.html#collections.Counter) class to be useful in this exercise, as well as the [numpy](https://docs.scipy.org/doc/numpy/reference/) library.
###Code
from collections import Counter
import numpy as np
###Output
_____no_output_____
###Markdown
We'll create three `Counter` objects, one for words from positive reviews, one for words from negative reviews, and one for all the words.
###Code
# Create three Counter objects to store positive, negative and total counts
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
###Output
_____no_output_____
###Markdown
**TODO:** Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.**Note:** Throughout these projects, you should use `split(' ')` to divide a piece of text (such as a review) into individual words. If you use `split()` instead, you'll get slightly different results than what the videos and solutions show.
###Code
# Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
###Output
_____no_output_____
###Markdown
Run the following two cells to list the words used in positive reviews and negative reviews, respectively, ordered from most to least commonly used.
###Code
# Examine the counts of the most common words in positive reviews
positive_counts.most_common()
# Examine the counts of the most common words in negative reviews
negative_counts.most_common()
###Output
_____no_output_____
###Markdown
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the **ratios** of word usage between positive and negative reviews.**TODO:** Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in `pos_neg_ratios`. >Hint: the positive-to-negative ratio for a given word can be calculated with `positive_counts[word] / float(negative_counts[word]+1)`. Notice the `+1` in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
###Code
pos_neg_ratios = Counter()
# Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
###Output
_____no_output_____
###Markdown
Examine the ratios you've calculated for a few words:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
Pos-to-neg ratio for 'the' = 1.0607993145235326
Pos-to-neg ratio for 'amazing' = 4.022813688212928
Pos-to-neg ratio for 'terrible' = 0.17744252873563218
###Markdown
Looking closely at the values you just calculated, we see the following: * Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.* Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.* Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The `+1` we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:* Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. So we should center all the values around neutral so the absolute value from neutral of the positive-to-negative ratio for a word would indicate how much sentiment (positive or negative) that word conveys.* When comparing absolute values it's easier to do that around zero than one. To fix these issues, we'll convert all of our ratios to new values using logarithms.**TODO:** Go through all the ratios you calculated and convert them to logarithms. (i.e. use `np.log(ratio)`)In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
###Code
# Convert ratios to logs
for word,ratio in pos_neg_ratios.most_common():
pos_neg_ratios[word] = np.log(ratio)
###Output
_____no_output_____
###Markdown
**NOTE:** In the video, Andrew uses the following formulas for the previous cell:> * For any positive words, convert the ratio using `np.log(ratio)`> * For any negative words, convert the ratio using `-np.log(1/(ratio + 0.01))`These won't give you the exact same results as the simpler code we show in this notebook, but the values will be similar. In case that second equation looks strange, here's what it's doing: First, it divides one by a very small number, which will produce a larger positive number. Then, it takes the `log` of that, which produces numbers similar to the ones for the positive words. Finally, it negates the values by adding that minus sign up front. The results are extremely positive and extremely negative words having positive-to-negative ratios with similar magnitudes but opposite signs, just like when we use `np.log(ratio)`. Examine the new ratios you've calculated for the same words from before:
###Code
print("Pos-to-neg ratio for 'the' = {}".format(pos_neg_ratios["the"]))
print("Pos-to-neg ratio for 'amazing' = {}".format(pos_neg_ratios["amazing"]))
print("Pos-to-neg ratio for 'terrible' = {}".format(pos_neg_ratios["terrible"]))
###Output
Pos-to-neg ratio for 'the' = 0.05902269426102881
Pos-to-neg ratio for 'amazing' = 1.3919815802404802
Pos-to-neg ratio for 'terrible' = -1.7291085042663878
###Markdown
If everything worked, now you should see neutral words with values close to zero. In this case, "the" is near zero but slightly positive, so it was probably used in more positive reviews than negative reviews. But look at "amazing"'s ratio - it's above `1`, showing it is clearly a word with positive sentiment. And "terrible" has a similar score, but in the opposite direction, so it's below `-1`. It's now clear that both of these words are associated with specific, opposing sentiments.Now run the following cells to see more ratios. The first cell displays all the words, ordered by how associated they are with positive reviews. (Your notebook will most likely truncate the output so you won't actually see *all* the words in the list.)The second cell displays the 30 words most associated with negative reviews by reversing the order of the first list and then looking at the first 30 words. (If you want the second cell to display all the words, ordered by how associated they are with negative reviews, you could just write `reversed(pos_neg_ratios.most_common())`.)You should continue to see values similar to the earlier ones we checked – neutral words will be close to `0`, words will get more positive as their ratios approach and go above `1`, and words will get more negative as their ratios approach and go below `-1`. That's why we decided to use the logs instead of the raw ratios.
###Code
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
# Note: Above is the code Andrew uses in his solution video,
# so we've included it here to avoid confusion.
# If you explore the documentation for the Counter class,
# you will see you could also find the 30 least common
# words like this: pos_neg_ratios.most_common()[:-31:-1]
###Output
_____no_output_____
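###Markdown
Before moving on, here is a hedged sketch of the formulas from the earlier NOTE (Andrew's video variant). It is applied to a separate `Counter` (named `video_ratios` just for this illustration) so the `pos_neg_ratios` values used in the rest of the notebook stay untouched.
###Code
# Hedged sketch of the video's formulas described in the NOTE above; a separate
# Counter is used so pos_neg_ratios itself is not modified.
video_ratios = Counter()
for term, cnt in total_counts.most_common():
    if cnt > 100:
        video_ratios[term] = positive_counts[term] / float(negative_counts[term] + 1)
for word, ratio in video_ratios.most_common():
    if ratio > 1:
        video_ratios[word] = np.log(ratio)
    else:
        video_ratios[word] = -np.log(1 / (ratio + 0.01))
print("Video-style ratio for 'amazing'  = {}".format(video_ratios["amazing"]))
print("Video-style ratio for 'terrible' = {}".format(video_ratios["terrible"]))
###Output
_____no_output_____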
###Markdown
End of Project 1. Watch the next video to continue with Andrew's next lesson. Transforming Text into Numbers
###Code
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
###Output
_____no_output_____
###Markdown
Project 2: Creating the Input/Output Data**TODO:** Create a [set](https://docs.python.org/3/tutorial/datastructures.html#sets) named `vocab` that contains every word in the vocabulary.
###Code
vocab = set(total_counts.keys())
###Output
_____no_output_____
###Markdown
Run the following cell to check your vocabulary size. If everything worked correctly, it should print **74074**
###Code
vocab_size = len(vocab)
print(vocab_size)
###Output
74074
###Markdown
Take a look at the following image. It represents the layers of the neural network you'll be building throughout this notebook. `layer_0` is the input layer, `layer_1` is a hidden layer, and `layer_2` is the output layer.
###Code
from IPython.display import Image
Image(filename='sentiment_network_2.png')
###Output
_____no_output_____
###Markdown
**TODO:** Create a numpy array called `layer_0` and initialize it to all zeros. You will find the [zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) function particularly helpful here. Be sure you create `layer_0` as a 2-dimensional matrix with 1 row and `vocab_size` columns.
###Code
layer_0 = np.zeros((1,vocab_size))
###Output
_____no_output_____
###Markdown
Run the following cell. It should display `(1, 74074)`
###Code
layer_0.shape
from IPython.display import Image
Image(filename='sentiment_network.png')
###Output
_____no_output_____
###Markdown
`layer_0` contains one entry for every word in the vocabulary, as shown in the above image. We need to make sure we know the index of each word, so run the following cell to create a lookup table that stores the index of every word.
###Code
# Create a dictionary of words in the vocabulary mapped to index positions
# (to be used in layer_0)
word2index = {}
for i,word in enumerate(vocab):
word2index[word] = i
# display the map of words to indices
word2index
###Output
_____no_output_____
###Markdown
**TODO:** Complete the implementation of `update_input_layer`. It should count how many times each word is used in the given review, and then store those counts at the appropriate indices inside `layer_0`.
###Code
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
# count how many times each word is used in the given review and store the results in layer_0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
###Output
_____no_output_____
###Markdown
Run the following cell to test updating the input layer with the first review. The indices assigned may not be the same as in the solution, but hopefully you'll see some non-zero values in `layer_0`.
###Code
update_input_layer(reviews[0])
layer_0
###Output
_____no_output_____
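###Markdown
As a small illustrative check (an aside, not part of the original project), you can look up one entry of `layer_0` directly: after calling `update_input_layer(reviews[0])`, the entry for a common word such as `the` holds the number of times that word appears in the first review.
###Code
# Illustrative check: the column for 'the' holds its count in reviews[0]
# (it prints 0.0 if the word happened not to appear in that review).
print(layer_0[0][word2index['the']])
###Output
_____no_output_____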
###Markdown
**TODO:** Complete the implementation of `get_target_for_label`. It should return `0` or `1`, depending on whether the given label is `NEGATIVE` or `POSITIVE`, respectively.
###Code
def get_target_for_label(label):
"""Convert a label to `0` or `1`.
Args:
label(string) - Either "POSITIVE" or "NEGATIVE".
Returns:
`0` or `1`.
"""
if(label == 'POSITIVE'):
return 1
else:
return 0
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out`'POSITIVE'` and `1`, respectively.
###Code
labels[0]
get_target_for_label(labels[0])
###Output
_____no_output_____
###Markdown
Run the following two cells. They should print out `'NEGATIVE'` and `0`, respectively.
###Code
labels[1]
get_target_for_label(labels[1])
###Output
_____no_output_____
###Markdown
End of Project 2 solution. Watch the next video to continue with Andrew's next lesson. Project 3: Building a Neural Network **TODO:** We've included the framework of a class called `SentimentNetwork`. Implement all of the items marked `TODO` in the code. These include doing the following:- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer. - Do **not** add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.- Re-use the code from earlier in this notebook to create the training data (see `TODO`s in the code)- Implement the `pre_process_data` function to create the vocabulary for our training data generating functions- Ensure `train` trains over the entire corpus Where to Get Help if You Need it- Re-watch previous week's Udacity Lectures- Chapters 3-5 - [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning) - (Check inside your classroom for a discount code)
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
# will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
self.layer_0[0][self.word2index[word]] += 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the testing_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
# Return POSITIVE for values greater than or equal to 0.5 in the output layer;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to create a `SentimentNetwork` that will train on all but the last 1000 reviews (we're saving those for testing). Here we use a learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
###Output
_____no_output_____
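###Markdown
Before training, you can optionally push a single review through the network (an aside, not part of the original project). Because `weights_0_1` starts out as all zeros, the hidden layer is all zeros, the sigmoid output is exactly 0.5, and the untrained network answers `POSITIVE` for every review.
###Code
# Optional sanity check: one forward pass through the untrained network.
# With weights_0_1 initialised to zeros, layer_1 is all zeros, so layer_2 is
# sigmoid(0) = 0.5 and run() returns "POSITIVE" for any input.
print(mlp.run(reviews[-1]))
###Output
_____no_output_____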
###Markdown
Run the following cell to test the network's performance against the last 1000 reviews (the ones we held out from our training set). **We have not trained the model yet, so the results should be about 50% as it will just be guessing and there are only two possible values to choose from.**
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Tested:1 Testing Accuracy:100.%
Progress:0.1% Speed(reviews/sec):83.56 #Correct:1 #Tested:2 Testing Accuracy:50.0%
Progress:0.2% Speed(reviews/sec):154.2 #Correct:2 #Tested:3 Testing Accuracy:66.6%
Progress:0.3% Speed(reviews/sec):214.8 #Correct:2 #Tested:4 Testing Accuracy:50.0%
Progress:0.4% Speed(reviews/sec):267.4 #Correct:3 #Tested:5 Testing Accuracy:60.0%
Progress:0.5% Speed(reviews/sec):313.3 #Correct:3 #Tested:6 Testing Accuracy:50.0%
Progress:0.6% Speed(reviews/sec):353.9 #Correct:4 #Tested:7 Testing Accuracy:57.1%
Progress:0.7% Speed(reviews/sec):389.9 #Correct:4 #Tested:8 Testing Accuracy:50.0%
Progress:0.8% Speed(reviews/sec):422.2 #Correct:5 #Tested:9 Testing Accuracy:55.5%
Progress:0.9% Speed(reviews/sec):429.7 #Correct:5 #Tested:10 Testing Accuracy:50.0%
Progress:1.0% Speed(reviews/sec):455.7 #Correct:6 #Tested:11 Testing Accuracy:54.5%
Progress:1.1% Speed(reviews/sec):479.5 #Correct:6 #Tested:12 Testing Accuracy:50.0%
Progress:1.2% Speed(reviews/sec):501.3 #Correct:7 #Tested:13 Testing Accuracy:53.8%
Progress:1.3% Speed(reviews/sec):520.8 #Correct:7 #Tested:14 Testing Accuracy:50.0%
Progress:1.4% Speed(reviews/sec):539.9 #Correct:8 #Tested:15 Testing Accuracy:53.3%
Progress:1.5% Speed(reviews/sec):557.0 #Correct:8 #Tested:16 Testing Accuracy:50.0%
Progress:1.6% Speed(reviews/sec):553.2 #Correct:9 #Tested:17 Testing Accuracy:52.9%
Progress:1.7% Speed(reviews/sec):568.2 #Correct:9 #Tested:18 Testing Accuracy:50.0%
Progress:1.8% Speed(reviews/sec):582.2 #Correct:10 #Tested:19 Testing Accuracy:52.6%
Progress:1.9% Speed(reviews/sec):595.3 #Correct:10 #Tested:20 Testing Accuracy:50.0%
Progress:2.0% Speed(reviews/sec):589.4 #Correct:11 #Tested:21 Testing Accuracy:52.3%
Progress:2.1% Speed(reviews/sec):601.2 #Correct:11 #Tested:22 Testing Accuracy:50.0%
Progress:2.2% Speed(reviews/sec):612.3 #Correct:12 #Tested:23 Testing Accuracy:52.1%
Progress:2.3% Speed(reviews/sec):606.5 #Correct:12 #Tested:24 Testing Accuracy:50.0%
Progress:2.4% Speed(reviews/sec):616.6 #Correct:13 #Tested:25 Testing Accuracy:52.0%
Progress:2.5% Speed(reviews/sec):611.4 #Correct:13 #Tested:26 Testing Accuracy:50.0%
Progress:2.6% Speed(reviews/sec):620.6 #Correct:14 #Tested:27 Testing Accuracy:51.8%
Progress:2.7% Speed(reviews/sec):601.6 #Correct:14 #Tested:28 Testing Accuracy:50.0%
Progress:2.8% Speed(reviews/sec):597.3 #Correct:15 #Tested:29 Testing Accuracy:51.7%
Progress:2.9% Speed(reviews/sec):593.4 #Correct:15 #Tested:30 Testing Accuracy:50.0%
Progress:3.0% Speed(reviews/sec):589.8 #Correct:16 #Tested:31 Testing Accuracy:51.6%
Progress:3.1% Speed(reviews/sec):597.7 #Correct:16 #Tested:32 Testing Accuracy:50.0%
Progress:3.2% Speed(reviews/sec):605.3 #Correct:17 #Tested:33 Testing Accuracy:51.5%
Progress:3.3% Speed(reviews/sec):612.7 #Correct:17 #Tested:34 Testing Accuracy:50.0%
Progress:3.4% Speed(reviews/sec):619.8 #Correct:18 #Tested:35 Testing Accuracy:51.4%
Progress:3.5% Speed(reviews/sec):615.6 #Correct:18 #Tested:36 Testing Accuracy:50.0%
Progress:3.6% Speed(reviews/sec):611.8 #Correct:19 #Tested:37 Testing Accuracy:51.3%
Progress:3.7% Speed(reviews/sec):618.3 #Correct:19 #Tested:38 Testing Accuracy:50.0%
Progress:3.8% Speed(reviews/sec):614.5 #Correct:20 #Tested:39 Testing Accuracy:51.2%
Progress:3.9% Speed(reviews/sec):620.7 #Correct:20 #Tested:40 Testing Accuracy:50.0%
Progress:4.0% Speed(reviews/sec):617.0 #Correct:21 #Tested:41 Testing Accuracy:51.2%
Progress:4.1% Speed(reviews/sec):622.8 #Correct:21 #Tested:42 Testing Accuracy:50.0%
Progress:4.2% Speed(reviews/sec):628.5 #Correct:22 #Tested:43 Testing Accuracy:51.1%
Progress:4.3% Speed(reviews/sec):624.8 #Correct:22 #Tested:44 Testing Accuracy:50.0%
Progress:4.4% Speed(reviews/sec):621.3 #Correct:23 #Tested:45 Testing Accuracy:51.1%
Progress:4.5% Speed(reviews/sec):618.0 #Correct:23 #Tested:46 Testing Accuracy:50.0%
Progress:4.6% Speed(reviews/sec):623.2 #Correct:24 #Tested:47 Testing Accuracy:51.0%
Progress:4.7% Speed(reviews/sec):628.3 #Correct:24 #Tested:48 Testing Accuracy:50.0%
Progress:4.8% Speed(reviews/sec):625.0 #Correct:25 #Tested:49 Testing Accuracy:51.0%
Progress:4.9% Speed(reviews/sec):629.8 #Correct:25 #Tested:50 Testing Accuracy:50.0%
Progress:5.0% Speed(reviews/sec):634.6 #Correct:26 #Tested:51 Testing Accuracy:50.9%
Progress:5.1% Speed(reviews/sec):639.2 #Correct:26 #Tested:52 Testing Accuracy:50.0%
Progress:5.2% Speed(reviews/sec):628.1 #Correct:27 #Tested:53 Testing Accuracy:50.9%
Progress:5.3% Speed(reviews/sec):632.6 #Correct:27 #Tested:54 Testing Accuracy:50.0%
Progress:5.4% Speed(reviews/sec):636.9 #Correct:28 #Tested:55 Testing Accuracy:50.9%
Progress:5.5% Speed(reviews/sec):641.2 #Correct:28 #Tested:56 Testing Accuracy:50.0%
Progress:5.6% Speed(reviews/sec):638.0 #Correct:29 #Tested:57 Testing Accuracy:50.8%
Progress:5.7% Speed(reviews/sec):642.1 #Correct:29 #Tested:58 Testing Accuracy:50.0%
Progress:5.8% Speed(reviews/sec):646.1 #Correct:30 #Tested:59 Testing Accuracy:50.8%
Progress:5.9% Speed(reviews/sec):650.0 #Correct:30 #Tested:60 Testing Accuracy:50.0%
Progress:6.0% Speed(reviews/sec):653.9 #Correct:31 #Tested:61 Testing Accuracy:50.8%
Progress:6.1% Speed(reviews/sec):657.6 #Correct:31 #Tested:62 Testing Accuracy:50.0%
Progress:6.2% Speed(reviews/sec):661.3 #Correct:32 #Tested:63 Testing Accuracy:50.7%
Progress:6.3% Speed(reviews/sec):664.9 #Correct:32 #Tested:64 Testing Accuracy:50.0%
Progress:6.4% Speed(reviews/sec):661.5 #Correct:33 #Tested:65 Testing Accuracy:50.7%
Progress:6.5% Speed(reviews/sec):665.0 #Correct:33 #Tested:66 Testing Accuracy:50.0%
Progress:6.6% Speed(reviews/sec):661.7 #Correct:34 #Tested:67 Testing Accuracy:50.7%
Progress:6.7% Speed(reviews/sec):665.1 #Correct:34 #Tested:68 Testing Accuracy:50.0%
Progress:6.8% Speed(reviews/sec):668.4 #Correct:35 #Tested:69 Testing Accuracy:50.7%
Progress:6.9% Speed(reviews/sec):671.7 #Correct:35 #Tested:70 Testing Accuracy:50.0%
Progress:7.0% Speed(reviews/sec):668.4 #Correct:36 #Tested:71 Testing Accuracy:50.7%
Progress:7.1% Speed(reviews/sec):678.0 #Correct:36 #Tested:72 Testing Accuracy:50.0%
Progress:7.2% Speed(reviews/sec):681.0 #Correct:37 #Tested:73 Testing Accuracy:50.6%
Progress:7.3% Speed(reviews/sec):684.0 #Correct:37 #Tested:74 Testing Accuracy:50.0%
Progress:7.4% Speed(reviews/sec):687.0 #Correct:38 #Tested:75 Testing Accuracy:50.6%
Progress:7.5% Speed(reviews/sec):689.9 #Correct:38 #Tested:76 Testing Accuracy:50.0%
Progress:7.6% Speed(reviews/sec):692.7 #Correct:39 #Tested:77 Testing Accuracy:50.6%
Progress:7.7% Speed(reviews/sec):695.5 #Correct:39 #Tested:78 Testing Accuracy:50.0%
Progress:7.8% Speed(reviews/sec):692.0 #Correct:40 #Tested:79 Testing Accuracy:50.6%
Progress:7.9% Speed(reviews/sec):694.8 #Correct:40 #Tested:80 Testing Accuracy:50.0%
Progress:8.0% Speed(reviews/sec):691.5 #Correct:41 #Tested:81 Testing Accuracy:50.6%
Progress:8.1% Speed(reviews/sec):682.4 #Correct:41 #Tested:82 Testing Accuracy:50.0%
Progress:8.2% Speed(reviews/sec):673.9 #Correct:42 #Tested:83 Testing Accuracy:50.6%
Progress:8.3% Speed(reviews/sec):676.5 #Correct:42 #Tested:84 Testing Accuracy:50.0%
Progress:8.4% Speed(reviews/sec):679.2 #Correct:43 #Tested:85 Testing Accuracy:50.5%
Progress:8.5% Speed(reviews/sec):676.4 #Correct:43 #Tested:86 Testing Accuracy:50.0%
Progress:8.6% Speed(reviews/sec):673.6 #Correct:44 #Tested:87 Testing Accuracy:50.5%
Progress:8.7% Speed(reviews/sec):676.1 #Correct:44 #Tested:88 Testing Accuracy:50.0%
Progress:8.8% Speed(reviews/sec):673.5 #Correct:45 #Tested:89 Testing Accuracy:50.5%
Progress:8.9% Speed(reviews/sec):670.9 #Correct:45 #Tested:90 Testing Accuracy:50.0%
Progress:9.0% Speed(reviews/sec):668.4 #Correct:46 #Tested:91 Testing Accuracy:50.5%
Progress:9.1% Speed(reviews/sec):670.9 #Correct:46 #Tested:92 Testing Accuracy:50.0%
Progress:9.2% Speed(reviews/sec):673.3 #Correct:47 #Tested:93 Testing Accuracy:50.5%
Progress:9.3% Speed(reviews/sec):675.7 #Correct:47 #Tested:94 Testing Accuracy:50.0%
Progress:9.4% Speed(reviews/sec):678.0 #Correct:48 #Tested:95 Testing Accuracy:50.5%
Progress:9.5% Speed(reviews/sec):675.5 #Correct:48 #Tested:96 Testing Accuracy:50.0%
Progress:9.6% Speed(reviews/sec):663.8 #Correct:49 #Tested:97 Testing Accuracy:50.5%
Progress:9.7% Speed(reviews/sec):644.1 #Correct:49 #Tested:98 Testing Accuracy:50.0%
Progress:9.8% Speed(reviews/sec):633.9 #Correct:50 #Tested:99 Testing Accuracy:50.5%
Progress:9.9% Speed(reviews/sec):636.3 #Correct:50 #Tested:100 Testing Accuracy:50.0%
Progress:10.0% Speed(reviews/sec):634.6 #Correct:51 #Tested:101 Testing Accuracy:50.4%
Progress:10.1% Speed(reviews/sec):636.9 #Correct:51 #Tested:102 Testing Accuracy:50.0%
Progress:10.2% Speed(reviews/sec):635.2 #Correct:52 #Tested:103 Testing Accuracy:50.4%
Progress:10.3% Speed(reviews/sec):629.7 #Correct:52 #Tested:104 Testing Accuracy:50.0%
Progress:10.4% Speed(reviews/sec):628.1 #Correct:53 #Tested:105 Testing Accuracy:50.4%
Progress:10.5% Speed(reviews/sec):626.6 #Correct:53 #Tested:106 Testing Accuracy:50.0%
Progress:10.6% Speed(reviews/sec):617.9 #Correct:54 #Tested:107 Testing Accuracy:50.4%
Progress:10.7% Speed(reviews/sec):616.5 #Correct:54 #Tested:108 Testing Accuracy:50.0%
Progress:10.8% Speed(reviews/sec):615.2 #Correct:55 #Tested:109 Testing Accuracy:50.4%
Progress:10.9% Speed(reviews/sec):610.5 #Correct:55 #Tested:110 Testing Accuracy:50.0%
Progress:11.0% Speed(reviews/sec):612.7 #Correct:56 #Tested:111 Testing Accuracy:50.4%
Progress:11.1% Speed(reviews/sec):611.5 #Correct:56 #Tested:112 Testing Accuracy:50.0%
Progress:11.2% Speed(reviews/sec):610.3 #Correct:57 #Tested:113 Testing Accuracy:50.4%
Progress:11.3% Speed(reviews/sec):609.1 #Correct:57 #Tested:114 Testing Accuracy:50.0%
Progress:11.4% Speed(reviews/sec):608.0 #Correct:58 #Tested:115 Testing Accuracy:50.4%
Progress:11.5% Speed(reviews/sec):610.0 #Correct:58 #Tested:116 Testing Accuracy:50.0%
###Markdown
Run the following cell to actually train the network. During training, it will repeatedly display the model's accuracy so you can see how well it's doing.
###Code
mlp.train(reviews[:-1000],labels[:-1000])
###Output
Progress:0.0% Speed(reviews/sec):0.0 #Correct:1 #Trained:1 Training Accuracy:100.%
Progress:10.4% Speed(reviews/sec):117.4 #Correct:1251 #Trained:2501 Training Accuracy:50.0%
Progress:20.8% Speed(reviews/sec):116.3 #Correct:2501 #Trained:5001 Training Accuracy:50.0%
Progress:31.2% Speed(reviews/sec):109.0 #Correct:3751 #Trained:7501 Training Accuracy:50.0%
Progress:41.6% Speed(reviews/sec):114.5 #Correct:5001 #Trained:10001 Training Accuracy:50.0%
Progress:52.0% Speed(reviews/sec):115.0 #Correct:6251 #Trained:12501 Training Accuracy:50.0%
Progress:62.5% Speed(reviews/sec):116.0 #Correct:7501 #Trained:15001 Training Accuracy:50.0%
Progress:72.9% Speed(reviews/sec):116.3 #Correct:8751 #Trained:17501 Training Accuracy:50.0%
Progress:75.1% Speed(reviews/sec):116.5 #Correct:9013 #Trained:18025 Training Accuracy:50.0%
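###Markdown
Before changing anything, it helps to recall what the learning rate controls: every weight update is the error gradient scaled by the learning rate, so a rate that is too large keeps overshooting the minimum instead of settling into it. The tiny sketch below uses made-up numbers (not the network's real gradients) to show gradient descent on a simple 1-D bowl with a too-large and a smaller learning rate.
###Code
# Illustrative only: gradient descent on f(w) = w**2, whose gradient is 2*w
for lr in (1.1, 0.1):
    w = 1.0
    for _ in range(10):
        w -= lr * 2 * w        # the update step is gradient * learning rate
    print("learning rate", lr, "-> w after 10 steps:", round(w, 4))
###Output
_____no_output_____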
###Markdown
That most likely didn't train very well. Part of the reason may be that the learning rate is too high. Run the following cell to recreate the network with a smaller learning rate, `0.01`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
That probably wasn't much different. Run the following cell to recreate the network one more time with an even smaller learning rate, `0.001`, and then train the new network.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.001)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
With a learning rate of `0.001`, the network should finally have started to improve during training. It's still not very good, but it shows that this solution has potential. We will improve it in the next lesson. End of Project 3. Watch the next video to continue with Andrew's next lesson. Understanding Neural Noise
###Code
from IPython.display import Image
Image(filename='sentiment_network.png')
def update_input_layer(review):
global layer_0
# clear out previous state, reset the layer to be all 0s
layer_0 *= 0
for word in review.split(" "):
layer_0[0][word2index[word]] += 1
update_input_layer(reviews[0])
layer_0
review_counter = Counter()
for word in reviews[0].split(" "):
review_counter[word] += 1
review_counter.most_common()
###Output
_____no_output_____
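###Markdown
Notice in the counts above that very frequent but largely neutral tokens (like "the", "and", and punctuation) dominate the input vector, drowning out the words that actually carry sentiment. Below is a small, self-contained illustration of the idea behind the next project (the sample text and variable names are made up, not part of Andrew's solution): record only whether a word appears, instead of how many times it appears.
###Code
# Counting lets frequent-but-neutral tokens dominate; a binary
# "was this word used?" representation treats every used word equally.
from collections import Counter

sample = "the movie was the best the best"
counts = Counter(sample.split(" "))                     # e.g. 'the' -> 3
binary = {word: 1 for word in set(sample.split(" "))}   # every used word -> 1
print(counts)
print(binary)
###Output
_____no_output_____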
###Markdown
Project 4: Reducing Noise in Our Input Data**TODO:** Attempt to reduce the noise in the input data like Andrew did in the previous video. Specifically, do the following:* Copy the `SentimentNetwork` class you created earlier into the following cell.* Modify `update_input_layer` so it does not count how many times each word is used, but rather just stores whether or not a word was used. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 4"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
        # reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
# The input layer, a two-dimensional matrix with shape 1 x input_nodes
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# clear out previous state, reset the layer to be all 0s
self.layer_0 *= 0
for word in review.split(" "):
# NOTE: This if-check was not in the version of this method created in Project 2,
# and it appears in Andrew's Project 3 solution without explanation.
# It simply ensures the word is actually a key in word2index before
# accessing it, which is important because accessing an invalid key
            # will raise an exception in Python. This allows us to ignore unknown
# words encountered in new reviews.
if(word in self.word2index.keys()):
## New for Project 4: changed to set to 1 instead of add 1
self.layer_0[0][self.word2index[word]] = 1
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
def train(self, training_reviews, training_labels):
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
# Input Layer
self.update_input_layer(review)
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
self.weights_1_2 -= layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
self.weights_0_1 -= self.layer_0.T.dot(layer_1_delta) * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
# Input Layer
self.update_input_layer(review.lower())
# Hidden layer
layer_1 = self.layer_0.dot(self.weights_0_1)
# Output layer
layer_2 = self.sigmoid(layer_1.dot(self.weights_1_2))
        # Return POSITIVE for output-layer values greater than or equal to 0.5;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it. Notice we've gone back to the higher learning rate of `0.1`.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 4 solution. Watch the next video to continue with Andrew's next lesson. Analyzing Inefficiencies in our Network
###Code
Image(filename='sentiment_network_sparse.png')
layer_0 = np.zeros(10)
layer_0
layer_0[4] = 1
layer_0[9] = 1
layer_0
weights_0_1 = np.random.randn(10,5)
layer_0.dot(weights_0_1)
indices = [4,9]
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (1 * weights_0_1[index])
layer_1
Image(filename='sentiment_network_sparse_2.png')
layer_1 = np.zeros(5)
for index in indices:
layer_1 += (weights_0_1[index])
layer_1
###Output
_____no_output_____
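###Markdown
The cell above computes the same hidden layer in two ways: a full matrix product with a mostly-zero `layer_0`, and a sum of just the rows of `weights_0_1` whose indices are switched on. A quick check (a sketch reusing the small variables defined above) confirms the two results match, which is the key idea behind the efficiency changes in the next project.
###Code
# layer_0, weights_0_1 and indices were defined in the cell above.
# The dense dot product and the sparse row-sum give the same hidden layer.
print(np.allclose(layer_0.dot(weights_0_1), weights_0_1[indices].sum(axis=0)))
###Output
_____no_output_____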
###Markdown
Project 5: Making our Network More Efficient**TODO:** Make the `SentimentNetwork` class more efficient by eliminating unnecessary multiplications and additions that occur during forward and backward propagation. To do that, you can do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Remove the `update_input_layer` function - you will not need it in this version.* Modify `init_network`:>* You no longer need a separate input layer, so remove any mention of `self.layer_0`>* You will be dealing with the old hidden layer more directly, so create `self.layer_1`, a two-dimensional matrix with shape 1 x hidden_nodes, with all values initialized to zero* Modify `train`:>* Change the name of the input parameter `training_reviews` to `training_reviews_raw`. This will help with the next step.>* At the beginning of the function, you'll want to preprocess your reviews to convert them to a list of indices (from `word2index`) that are actually used in the review. This is equivalent to what you saw in the video when Andrew set specific indices to 1. Your code should create a local `list` variable named `training_reviews` that should contain a `list` for each review in `training_reviews_raw`. Those lists should contain the indices for words found in the review.>* Remove call to `update_input_layer`>* Use `self`'s `layer_1` instead of a local `layer_1` object.>* In the forward pass, replace the code that updates `layer_1` with new logic that only adds the weights for the indices used in the review.>* When updating `weights_0_1`, only update the individual weights that were used in the forward pass.* Modify `run`:>* Remove call to `update_input_layer` >* Use `self`'s `layer_1` instead of a local `layer_1` object.>* Much like you did in `train`, you will need to pre-process the `review` so you can work with word indices, then update `layer_1` by adding weights for the indices used in the review. The following code is the same as the previous project, with project-specific changes marked with `"New for Project 5"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews,labels,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
        # reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
        # The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
    ## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
        # Return POSITIVE for output-layer values greater than or equal to 0.5;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to recreate the network and train it once again.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000], learning_rate=0.1)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
That should have trained much better than the earlier attempts. Run the following cell to test your model with 1000 predictions.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 5 solution. Watch the next video to continue with Andrew's next lesson. Further Noise Reduction
###Code
Image(filename='sentiment_network_sparse_2.png')
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show, output_file
from bokeh.io import output_notebook
output_notebook()
hist, edges = np.histogram(list(map(lambda x:x[1],pos_neg_ratios.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="Word Positive/Negative Affinity Distribution")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
frequency_frequency = Counter()
for word, cnt in total_counts.most_common():
frequency_frequency[cnt] += 1
hist, edges = np.histogram(list(map(lambda x:x[1],frequency_frequency.most_common())), density=True, bins=100)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="The frequency distribution of the words in our corpus")
p.quad(top=hist, bottom=0, left=edges[:-1], right=edges[1:], line_color="#555555")
show(p)
###Output
_____no_output_____
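###Markdown
As a quick illustration of the positive-to-negative ratio that the next project moves inside the class (the counts here are made up, not taken from our dataset): a word seen far more often in positive reviews gets a large positive log-ratio, a word seen about equally often in both gets a log-ratio near zero, and a mostly-negative word gets a large negative log-ratio. A `polarity_cutoff` simply drops the words whose log-ratio sits near zero.
###Code
# Same form as the ratio computed in the Project 6 class below:
# divide the positive count by (negative count + 1), then take the log.
def log_pos_neg_ratio(pos_count, neg_count):
    ratio = pos_count / float(neg_count + 1)
    return np.log(ratio) if ratio > 1 else -np.log(1 / (ratio + 0.01))

for pos, neg in [(100, 10), (50, 50), (10, 100)]:
    print(pos, neg, round(log_pos_neg_ratio(pos, neg), 2))
###Output
_____no_output_____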
###Markdown
Project 6: Reducing Noise by Strategically Reducing the Vocabulary**TODO:** Improve `SentimentNetwork`'s performance by reducing more noise in the vocabulary. Specifically, do the following:* Copy the `SentimentNetwork` class from the previous project into the following cell.* Modify `pre_process_data`:>* Add two additional parameters: `min_count` and `polarity_cutoff`>* Calculate the positive-to-negative ratios of words used in the reviews. (You can use code you've written elsewhere in the notebook, but we are moving it into the class like we did with other helper code earlier.)>* Andrew's solution only calculates a positive-to-negative ratio for words that occur at least 50 times. This keeps the network from attributing too much sentiment to rarer words. You can choose to add this to your solution if you would like. >* Change so words are only added to the vocabulary if they occur in the reviews more than `min_count` times.>* Change so words are only added to the vocabulary if the absolute value of their positive-to-negative ratio is at least `polarity_cutoff`* Modify `__init__`:>* Add the same two parameters (`min_count` and `polarity_cutoff`) and use them when you call `pre_process_data` The following code is the same as the previous project, with project-specific changes marked with `"New for Project 6"`
###Code
import time
import sys
import numpy as np
# Encapsulate our neural network in a class
class SentimentNetwork:
## New for Project 6: added min_count and polarity_cutoff parameters
def __init__(self, reviews,labels,min_count = 10,polarity_cutoff = 0.1,hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
min_count(int) - Words should only be added to the vocabulary
if they occur more than this many times
polarity_cutoff(float) - The absolute value of a word's positive-to-negative
ratio must be at least this big to be considered.
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
        # reproducible results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
## New for Project 6: added min_count and polarity_cutoff arguments to pre_process_data call
self.pre_process_data(reviews, labels, polarity_cutoff, min_count)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
## New for Project 6: added min_count and polarity_cutoff parameters
def pre_process_data(self, reviews, labels, polarity_cutoff, min_count):
## ----------------------------------------
## New for Project 6: Calculate positive-to-negative ratios for words before
# building vocabulary
#
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt >= 50):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio + 0.01)))
#
## end New for Project 6
## ----------------------------------------
# populate review_vocab with all of the words in the given reviews
review_vocab = set()
for review in reviews:
for word in review.split(" "):
## New for Project 6: only add words that occur at least min_count times
# and for words with pos/neg ratios, only add words
# that meet the polarity_cutoff
if(total_counts[word] > min_count):
if(word in pos_neg_ratios.keys()):
if((pos_neg_ratios[word] >= polarity_cutoff) or (pos_neg_ratios[word] <= -polarity_cutoff)):
review_vocab.add(word)
else:
review_vocab.add(word)
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
# populate label_vocab with all of the words in the given labels.
label_vocab = set()
for label in labels:
label_vocab.add(label)
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i, word in enumerate(self.review_vocab):
self.word2index[word] = i
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i, label in enumerate(self.label_vocab):
self.label2index[label] = i
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# These are the weights between the input layer and the hidden layer.
self.weights_0_1 = np.zeros((self.input_nodes,self.hidden_nodes))
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(0.0, self.output_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
## New for Project 5: Removed self.layer_0; added self.layer_1
        # The hidden layer, a two-dimensional matrix with shape 1 x hidden_nodes
self.layer_1 = np.zeros((1,hidden_nodes))
## New for Project 5: Removed update_input_layer function
def get_target_for_label(self,label):
if(label == 'POSITIVE'):
return 1
else:
return 0
def sigmoid(self,x):
return 1 / (1 + np.exp(-x))
def sigmoid_output_2_derivative(self,output):
return output * (1 - output)
    ## New for Project 5: changed name of first parameter from 'training_reviews'
# to 'training_reviews_raw'
def train(self, training_reviews_raw, training_labels):
## New for Project 5: pre-process training reviews so we can deal
# directly with the indices of non-zero inputs
training_reviews = list()
for review in training_reviews_raw:
indices = set()
for word in review.split(" "):
if(word in self.word2index.keys()):
indices.add(self.word2index[word])
training_reviews.append(list(indices))
        # make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# Get the next review and its correct label
review = training_reviews[i]
label = training_labels[i]
#### Implement the forward pass here ####
### Forward pass ###
## New for Project 5: Removed call to 'update_input_layer' function
# because 'layer_0' is no longer used
# Hidden layer
## New for Project 5: Add in only the weights for non-zero items
self.layer_1 *= 0
for index in review:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use 'self.layer_1' instead of 'local layer_1'
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
#### Implement the backward pass here ####
### Backward pass ###
# Output error
layer_2_error = layer_2 - self.get_target_for_label(label) # Output layer error is the difference between desired target and actual output.
layer_2_delta = layer_2_error * self.sigmoid_output_2_derivative(layer_2)
# Backpropagated error
layer_1_error = layer_2_delta.dot(self.weights_1_2.T) # errors propagated to the hidden layer
layer_1_delta = layer_1_error # hidden layer gradients - no nonlinearity so it's the same as the error
# Update the weights
## New for Project 5: changed to use 'self.layer_1' instead of local 'layer_1'
self.weights_1_2 -= self.layer_1.T.dot(layer_2_delta) * self.learning_rate # update hidden-to-output weights with gradient descent step
## New for Project 5: Only update the weights that were used in the forward pass
for index in review:
self.weights_0_1[index] -= layer_1_delta[0] * self.learning_rate # update input-to-hidden weights with gradient descent step
# Keep track of correct predictions.
if(layer_2 >= 0.5 and label == 'POSITIVE'):
correct_so_far += 1
elif(layer_2 < 0.5 and label == 'NEGATIVE'):
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# Run a forward pass through the network, like in the "train" function.
## New for Project 5: Removed call to update_input_layer function
# because layer_0 is no longer used
# Hidden layer
## New for Project 5: Identify the indices used in the review and then add
# just those weights to layer_1
self.layer_1 *= 0
unique_indices = set()
for word in review.lower().split(" "):
if word in self.word2index.keys():
unique_indices.add(self.word2index[word])
for index in unique_indices:
self.layer_1 += self.weights_0_1[index]
# Output layer
## New for Project 5: changed to use self.layer_1 instead of local layer_1
layer_2 = self.sigmoid(self.layer_1.dot(self.weights_1_2))
        # Return POSITIVE for output-layer values greater than or equal to 0.5;
# return NEGATIVE for other values
if(layer_2[0] >= 0.5):
return "POSITIVE"
else:
return "NEGATIVE"
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a small polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.05,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
Run the following cell to train your network with a much larger polarity cutoff.
###Code
mlp = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=20,polarity_cutoff=0.8,learning_rate=0.01)
mlp.train(reviews[:-1000],labels[:-1000])
###Output
_____no_output_____
###Markdown
And run the following cell to test its performance.
###Code
mlp.test(reviews[-1000:],labels[-1000:])
###Output
_____no_output_____
###Markdown
End of Project 6 solution. Watch the next video to continue with Andrew's next lesson. Analysis: What's Going on in the Weights?
###Code
mlp_full = SentimentNetwork(reviews[:-1000],labels[:-1000],min_count=0,polarity_cutoff=0,learning_rate=0.01)
mlp_full.train(reviews[:-1000],labels[:-1000])
Image(filename='sentiment_network_sparse.png')
def get_most_similar_words(focus = "horrible"):
most_similar = Counter()
for word in mlp_full.word2index.keys():
most_similar[word] = np.dot(mlp_full.weights_0_1[mlp_full.word2index[word]],mlp_full.weights_0_1[mlp_full.word2index[focus]])
return most_similar.most_common()
get_most_similar_words("excellent")
get_most_similar_words("terrible")
import matplotlib.colors as colors
words_to_visualize = list()
for word, ratio in pos_neg_ratios.most_common(500):
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
for word, ratio in list(reversed(pos_neg_ratios.most_common()))[0:500]:
if(word in mlp_full.word2index.keys()):
words_to_visualize.append(word)
pos = 0
neg = 0
colors_list = list()
vectors_list = list()
for word in words_to_visualize:
if word in pos_neg_ratios.keys():
vectors_list.append(mlp_full.weights_0_1[mlp_full.word2index[word]])
if(pos_neg_ratios[word] > 0):
pos+=1
colors_list.append("#00ff00")
else:
neg+=1
colors_list.append("#000000")
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, random_state=0)
words_top_ted_tsne = tsne.fit_transform(vectors_list)
p = figure(tools="pan,wheel_zoom,reset,save",
toolbar_location="above",
title="vector T-SNE for most polarized words")
source = ColumnDataSource(data=dict(x1=words_top_ted_tsne[:,0],
x2=words_top_ted_tsne[:,1],
names=words_to_visualize,
color=colors_list))
p.scatter(x="x1", y="x2", size=8, source=source, fill_color="color")
word_labels = LabelSet(x="x1", y="x2", text="names", y_offset=6,
text_font_size="8pt", text_color="#555555",
source=source, text_align='center')
p.add_layout(word_labels)
show(p)
# green indicates positive words, black indicates negative words
###Output
_____no_output_____ |
image_processing/Handwritten_Digits_Detection.ipynb | ###Markdown
Handwritten Digits Detection---Marco Sanguineti, November 2021
###Code
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
print(tf.__version__)
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
try:
!nvidia-smi
except Exception as e:
print(e)
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train[0].shape
fig, ax = plt.subplots(nrows=3, ncols=3, figsize=(10, 10))
for i in range(3):
for j in range(3):
ax[i, j].imshow(x_train[i + j], cmap="gray")
plt.show()
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)
images = tf.keras.layers.Input(shape=x_train[0].shape)
x = tf.keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1))(images)
x = tf.keras.layers.Conv2D(16, (3, 3))(x)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, (3, 3))(x)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(10, activation=tf.keras.activations.softmax)(x)
model = tf.keras.models.Model(images, x)
model.summary()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="sparse_categorical_crossentropy", metrics=["accuracy"])
import time
start_time = time.time()
history = model.fit(x=x_train,
y=y_train,
epochs=5,
batch_size=256)
print("\n--- %s seconds ---" % (time.time() - start_time))
test_loss, test_acc = model.evaluate(x=x_test, y=y_test)
print(f'\nTest accuracy: {test_acc}')
predictions = model.predict([x_test])
fig, ax = plt.subplots(nrows=3, ncols=3, figsize=(10, 10))
for i in range(3):
for j in range(3):
ax[i, j].imshow(x_test[i + j], cmap="gray")
ax[i, j].set_title(f'True: {y_test[i + j]} - Predicted: {np.argmax(predictions[i + j])}')
plt.show()
try:
!nvidia-smi
except Exception as e:
print(e)
###Output
Sat Nov 6 09:29:43 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.44 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 69C P0 82W / 149W | 769MiB / 11441MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
|
python/oop_google_cert_practice_py.ipynb | ###Markdown
**Google Certificate - IT Automation with Python.** **Quiz Practice: Object Oriented Programming (OOP)** 1. Want to give this a go? Fill in the blanks in the code to make it print a poem.
###Code
class Flower:
color = 'unknown'
rose = Flower()
rose.color = "red"
violet = Flower()
violet.color = "blue"
this_pun_is_for_you = "Love flowers!!!"
print("Roses are {},".format(rose.color))
print("violets are {},".format(violet.color))
print(this_pun_is_for_you)
###Output
Roses are red,
violets are blue,
Love flowers!!!
###Markdown
**Question 1**Let’s test your knowledge of using dot notation to access methods and attributes in an object. Let’s say we have a class called Birds. Birds has two attributes: color and number. Birds also has a method called count() that counts the number of birds (adds a value to number). Which of the following lines of code will correctly print the number of birds? Keep in mind, the number of birds is 0 until they are counted!
###Code
# The correct answer is to call count() before printing the number attribute
# (bluejay here is assumed to be an instance of the Birds class from the question):
bluejay.count()
print(bluejay.number)
###Output
_____no_output_____
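###Markdown
To see why that ordering matters, here is a minimal sketch of what the Birds class described in the question might look like (the question does not give its definition, so this is only an illustration): number starts at 0, and count() has to run before the attribute holds the right value.
###Code
class Birds:
    color = "unknown"
    number = 0
    def count(self):
        # adds a value to number, as described in the question
        self.number += 1

bluejay = Birds()
bluejay.count()
print(bluejay.number)
###Output
_____no_output_____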
###Markdown
**Question 2**Creating new instances of class objects can be a great way to keep track of values using attributes associated with the object. The values of these attributes can be easily changed at the object level. The following code illustrates a famous quote by George Bernard Shaw, using objects to represent people. Fill in the blanks to make the code satisfy the behavior described in the quote.
###Code
# “If you have an apple and I have an apple and we exchange these apples then
# you and I will still each have one apple. But if you have an idea and I have
# an idea and we exchange these ideas, then each of us will have two ideas.”
# George Bernard Shaw
class Person:
apples = 0
ideas = 0
johanna = Person()
johanna.apples = 1
johanna.ideas = 1
martin = Person()
martin.apples = 2
martin.ideas = 1
def exchange_apples(you, me):
    #Here, despite G.B. Shaw's quote, our characters have started with
    #different amounts of apples so we can better observe the results.
    #We're going to have Martin and Johanna exchange ALL their apples with
    #one another.
#Hint: how would you switch values of variables,
#so that "you" and "me" will exchange ALL their apples with one another?
#Do you need a temporary variable to store one of the values?
#You may need more than one line of code to do that, which is OK.
apple_change = you.apples
you.apples = me.apples
me.apples = apple_change
return you.apples, me.apples
def exchange_ideas(you, me):
#"you" and "me" will share our ideas with one another.
#What operations need to be performed, so that each object receives
#the shared number of ideas?
#Hint: how would you assign the total number of ideas to
#each idea attribute? Do you need a temporary variable to store
#the sum of ideas, or can you find another way?
#Use as many lines of code as you need here.
you.ideas += me.ideas
me.ideas = you.ideas
return you.ideas, me.ideas
exchange_apples(johanna, martin)
print("Johanna has {} apples and Martin has {} apples".format(johanna.apples, martin.apples))
exchange_ideas(johanna, martin)
print("Johanna has {} ideas and Martin has {} ideas".format(johanna.ideas, martin.ideas))
###Output
Johanna has 2 apples and Martin has 1 apples
Johanna has 2 ideas and Martin has 2 ideas
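###Markdown
A side note on the swap above: Python's tuple unpacking lets you exchange two values without an explicit temporary variable, so the apple exchange could equally be written in one line (a stylistic alternative, not part of the graded answer).
###Code
a, b = 3, 5
a, b = b, a   # tuple unpacking swaps the two values in one step
print(a, b)   # 5 3
# Applied to the objects above, the same idea would read:
# you.apples, me.apples = me.apples, you.apples
###Output
_____no_output_____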
###Markdown
**Question 3**The City class has the following attributes: name, country (where the city is located), elevation (measured in meters), and population (approximate, according to recent statistics). Fill in the blanks of the max_elevation_city function to return the name of the city and its country (separated by a comma), when comparing the 3 defined instances for a specified minimal population. For example, calling the function for a minimum population of 1 million: max_elevation_city(1000000) should return "Sofia, Bulgaria".
###Code
# define a basic city class
class City:
name = ""
country = ""
elevation = 0
population = 0
# create a new instance of the City class and
# define each attribute
city1 = City()
city1.name = "Cusco"
city1.country = "Peru"
city1.elevation = 3399
city1.population = 358052
# create a new instance of the City class and
# define each attribute
city2 = City()
city2.name = "Sofia"
city2.country = "Bulgaria"
city2.elevation = 2290
city2.population = 1241675
# create a new instance of the City class and
# define each attribute
city3 = City()
city3.name = "Seoul"
city3.country = "South Korea"
city3.elevation = 38
city3.population = 9733509
def max_elevation_city(min_population):
# Initialize the variable that will hold
# the information of the city with
# the highest elevation
return_city = City()
# Evaluate the 1st instance to meet the requirements:
# does city #1 have at least min_population and
# is its elevation the highest evaluated so far?
if min_population <= city1.population:
return_city = city1
# Evaluate the 2nd instance to meet the requirements:
# does city #2 have at least min_population and
# is its elevation the highest evaluated so far?
if min_population <= city2.population and city2.elevation >= return_city.elevation:
return_city = city2
# Evaluate the 3rd instance to meet the requirements:
# does city #3 have at least min_population and
# is its elevation the highest evaluated so far?
if min_population <= city3.population and city3.elevation >= return_city.elevation:
return_city = city3
#Format the return string
if return_city.name:
return "{}, {}".format(return_city.name, return_city.country)
else:
return ""
print(max_elevation_city(100000)) # Should print "Cusco, Peru"
print(max_elevation_city(1000000)) # Should print "Sofia, Bulgaria"
print(max_elevation_city(10000000)) # Should print ""
###Output
Cusco, Peru
Sofia, Bulgaria
###Markdown
**Question 4**What makes an object different from a class?**Answer:** An object is a specific instance of a class. **Question 5**We have two pieces of furniture: a brown wood table and a red leather couch. Fill in the blanks following the creation of each Furniture class instance, so that the describe_furniture function can format a sentence that describes these pieces as follows: "This piece of furniture is made of {color} {material}"
###Code
class Furniture:
color = ""
material = ""
table = Furniture()
table.color = "brown"
table.material = "wood"
couch = Furniture()
couch.color = "red"
couch.material = "leather"
def describe_furniture(piece):
return ("This piece of furniture is made of {} {}".format(piece.color, piece.material))
print(describe_furniture(table))
# Should be "This piece of furniture is made of brown wood"
print(describe_furniture(couch))
# Should be "This piece of furniture is made of red leather"
###Output
This piece of furniture is made of brown wood
This piece of furniture is made of red leather
|
examples/CMHE Experiments on Synthetic Data.ipynb | ###Markdown
Cox Mixtures with Heterogeneous Effects Demo. Author: Mononito Goswami. 1. Introduction. Estimation of treatment efficacy of real-world clinical interventions involves working with continuous outcomes such as time-to-death, re-hospitalization, or a composite event that may be subject to censoring. Causal reasoning in such scenarios requires decoupling the effects of confounding physiological characteristics that affect baseline survival rates from the effects of the interventions being assessed. In this paper, we present a latent variable approach to model heterogeneous treatment effects by proposing that an individual can belong to one of $K$ latent clusters with distinct response characteristics. We show that this latent structure can mediate the base survival rates and helps determine the effects of an intervention. We demonstrate the ability of our approach to discover actionable phenotypes of individuals based on their treatment response on multiple large randomized clinical trials originally conducted to assess appropriate treatment strategies to reduce cardiovascular risk. Fig A. Schematic description of CMHE. Fig B. The CMHE Model in Plate Notation. A schematic description of the proposed model is shown in **Figure A**. The set of features (confounders) $\mathbf{x}$ is passed through an encoder to obtain deep non-linear representations. These representations then describe the latent phenogroups $\mathbf{P}(Z|X=\mathbf{x})$ and $\mathbf{P}(\mathbf{\phi}|X=\mathbf{x})$ that determine the base survival rate and the treatment effect respectively. Finally, the individual-level hazard (survival) curve under an intervention $A=\mathbf{a}$ is described by marginalizing over $Z$ and $\mathbf{\phi}$ as $\mathbf{S}(t|X=x, A=a) = \mathbf{E}_{(Z,\mathbf{\phi})\sim \mathbf{P}(\cdot|X)}\big[ \mathbf{S}(t|A=\mathbf{a}, X, Z, \mathbf{\phi})\big]$. **Figure B** presents the proposed model in plate notation. $\mathbf{x}$ confounds treatment assignment $A$ and outcome $T$ (model parameters and censoring distribution have been abstracted out). 2. Synthetic Data Example
###Code
import pandas as pd
import torch
from tqdm import tqdm
import sys
sys.path.append('../auton_survival/')
from datasets import load_dataset
from example_utils import *
###Output
_____no_output_____
###Markdown
2.1. Generative Process for the Synthetic Data 1. Features $x_1$, $x_2$ and the base survival phenotypes $Z$ are sampled from $\texttt{scikit-learn's make_blobs(...)}$ function which generates isotropic Gaussian blobs:$$[x_1, x_2], Z \sim \texttt{sklearn.datasets.make_blobs(K = 3)}$$2. Features $x_3$ and $x_4$ are sampled uniformly, whereas the underlying treatment effect phenotypes $\phi$ are defined according to an $L_1$-ball:$$ [x_3, x_4] \sim \texttt{Uniform}(-2, 2) $$$$ \phi \triangleq \mathbb{1}\{|x_3| + |x_4| > 2\} $$3. We then sample treatment assignments from a Bernoulli distribution:$$ A \sim \texttt{Bernoulli}(\frac{1}{2}) $$4. Next, the true time-to-event $T^*$ conditioned on the confounders $x$, latent $Z$ and latent effect group $\phi$ is generated from a Gompertz distribution:$$ T^{*}| (Z=k, {\phi}=m, A={a}) \sim \nonumber \texttt{Gompertz}\big({\beta}_{k}^{\top}{x} +({-a}^m)\big) $$5. Finally, the observed time $T$ is obtained after censoring some of the events, with the censoring time chosen uniformly at random up to $T^*$:$$\delta \sim \texttt{Bernoulli}(\frac{3}{4}), \quad C \sim \texttt{Uniform}(0, {T}^{*})$$$$ T = \begin{cases} T^*, & \text{if } \delta = 1 \\ C, & \text{if } \delta = 0 \end{cases} $$
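A minimal NumPy sketch of the recipe above, for illustration only; the sample size, the $\beta_k$ coefficients and the event-time draw below are assumptions (NumPy has no Gompertz sampler, so a Gumbel draw stands in for it), and the notebook itself loads a pre-generated dataset in the next cell:

```python
import numpy as np
from sklearn.datasets import make_blobs

rng = np.random.default_rng(0)
n = 5000  # assumed sample size

# 1. Base-survival phenotypes Z and features x1, x2 from isotropic Gaussian blobs
X12, Z = make_blobs(n_samples=n, centers=3, n_features=2, random_state=0)

# 2. Features x3, x4 sampled uniformly; treatment-effect phenotype phi from the L1-ball rule
X34 = rng.uniform(-2, 2, size=(n, 2))
phi = ((np.abs(X34[:, 0]) + np.abs(X34[:, 1])) > 2).astype(int)

# 3. Random treatment assignment
A = rng.binomial(1, 0.5, size=n)

# 4. Event times around the linear predictor beta_k^T x + (-a)^m (beta_k are assumed coefficients)
beta = rng.normal(size=(3, 4))
x = np.hstack([X12, X34])
eta = np.einsum('ij,ij->i', beta[Z], x) + (-A.astype(float)) ** phi
T_star = rng.gumbel(loc=eta, scale=1.0)   # Gumbel used as a stand-in for the Gompertz draw
T_star = T_star - T_star.min() + 1e-3     # shift so all event times are positive

# 5. Censor ~25% of the events, with censoring times uniform up to T*
delta = rng.binomial(1, 0.75, size=n)
C = rng.uniform(0, T_star)
T = np.where(delta == 1, T_star, C)
```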
###Code
# Load the synthetic dataset
outcomes, features, interventions = load_dataset(dataset='SYNTHETIC')
# Let's take a look at take the dataset
features.head(5)
###Output
_____no_output_____
###Markdown
2.2. Visualizing the Synthetic Data
###Code
plot_synthetic_data(outcomes, features, interventions)
###Output
_____no_output_____
###Markdown
2.3 Train and Test data split
###Code
# Hyper-parameters
random_seed = 0
test_size = 0.25
# Split the synthetic data into training and testing data
import numpy as np
np.random.seed(random_seed)
n = features.shape[0]
test_idx = np.zeros(n).astype('bool')
test_idx[np.random.randint(n, size=int(n*test_size))] = True
features_tr = features.iloc[~test_idx]
outcomes_tr = outcomes.iloc[~test_idx]
interventions_tr = interventions[~test_idx]
print(f'Number of training data points: {len(features_tr)}')
features_te = features.iloc[test_idx]
outcomes_te = outcomes.iloc[test_idx]
interventions_te = interventions[test_idx]
print(f'Number of test data points: {len(features_te)}')
x_tr = features_tr.values.astype('float32')
t_tr = outcomes_tr['time'].values.astype('float32')
e_tr = outcomes_tr['event'].values.astype('float32')
a_tr = interventions_tr.values.astype('float32')
x_te = features_te.values.astype('float32')
t_te = outcomes_te['time'].values.astype('float32')
e_te = outcomes_te['event'].values.astype('float32')
a_te = interventions_te.values.astype('float32')
print('Training Data Statistics:')
print(f'Shape of covariates: {x_tr.shape} | times: {t_tr.shape} | events: {e_tr.shape} | interventions: {a_tr.shape}')
def find_max_treatment_effect_phenotype(g, zeta_probs, factual_outcomes):
"""
  Find the group with the maximum treatment effect phenotype
"""
mean_differential_survival = np.zeros(zeta_probs.shape[1]) # Area under treatment phenotype group
outcomes_train, interventions_train = factual_outcomes
# Assign each individual to their treatment phenotype group
for gr in range(g): # For each treatment phenotype group
    # Probability of belonging to the g^th treatment phenotype
zeta_probs_g = zeta_probs[:, gr]
    # Consider only the individuals above the 75th percentile for this phenotype
z_mask = zeta_probs_g>np.quantile(zeta_probs_g, 0.75)
mean_differential_survival[gr] = find_mean_differential_survival(
outcomes_train.loc[z_mask], interventions_train.loc[z_mask])
return np.nanargmax(mean_differential_survival)
###Output
_____no_output_____
###Markdown
3. CMHE for Counterfactual Regression 3.1 Train CMHE for Counterfactual Regression
###Code
# Hyper-parameters to train model
k = 1 # number of underlying base survival phenotypes
g = 2 # number of underlying treatment effect phenotypes.
layers = [50, 50] # number of neurons in each hidden layer.
model_random_seed = 3
iters = 50 # number of training epochs
learning_rate = 0.001
batch_size = 128
vsize = 0.15 # size of the validation split
patience = 3
optimizer = "Adam"
from models.cmhe import DeepCoxMixturesHeterogenousEffects
torch.manual_seed(model_random_seed)
np.random.seed(model_random_seed)
# Instantiate the CMHE model
model = DeepCoxMixturesHeterogenousEffects(k=k, g=g, layers=layers)
model = model.fit(x_tr, t_tr, e_tr, a_tr, vsize=vsize, val_data=None, iters=iters,
learning_rate=learning_rate, batch_size=batch_size,
optimizer=optimizer, random_state=model_random_seed,
patience=patience)
print(f'Treatment Effect for the {g} groups: {model.torch_model[0].omega.detach()}')
zeta_probs_train = model.predict_latent_phi(x_tr)
zeta_train = np.argmax(zeta_probs_train, axis=1)
print(f'Distribution of individuals in each treatment phenotype in the training data: \
{np.unique(zeta_train, return_counts=True)[1]}')
max_treat_idx_CMHE = find_max_treatment_effect_phenotype(
g=2, zeta_probs=zeta_probs_train, factual_outcomes=(outcomes_tr, interventions_tr))
print(f'\nGroup {max_treat_idx_CMHE} has the maximum restricted mean survival time on the training data!')
###Output
Treatment Effect for the 2 groups: tensor([-0.5131, 0.3845])
Distribution of individuals in each treatment phenotype in the training data: [1968 1931]
Group 1 has the maximum restricted mean survival time on the training data!
###Markdown
3.2 Evaluate CMHE on Test Data
###Code
# Now for each individual in the test data, let's find the probability that
# they belong to the max treatment effect group
zeta_probs_test_CMHE = model.predict_latent_phi(x_te)
zeta_test = np.argmax(zeta_probs_test_CMHE, axis=1)
print(f'Distribution of individuals in each treatment phenotype in the test data: \
{np.unique(zeta_test, return_counts=True)[1]}')
# Now let us evaluate our performance
plot_phenotypes_roc(outcomes_te, zeta_probs_test_CMHE[:, max_treat_idx_CMHE])
###Output
Distribution of individuals in each treatment phenotype in the test data: [584 517]
###Markdown
3.3 Comparison with the Clustering phenotyper We compare the ability of CMHE against dimensionality reduction followed by clustering for counterfactual phenotyping. Specifically, we first perform dimensionality reduction of the input confounders, $\mathbf{x}$, followed by clustering. Due to a small number of confounders in the synthetic data, in the following experiment, we directly perform clustering using a Gaussian Mixture Model (GMM) with 2 components and diagonal covariance matrices.
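For reference, the clustering step that the phenotyper performs here can be written directly with scikit-learn. This is a sketch of the core call only, reusing the `features_tr` dataframe from the split above and the 2 components configured below; the wrapper's bookkeeping is omitted:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a diagonal-covariance GMM on the raw confounders (no dimensionality reduction)
gmm = GaussianMixture(n_components=2, covariance_type='diag', random_state=0)
gmm.fit(features_tr.values)

# Soft phenotype assignments, analogous to the zeta_probs_train computed in the next cell
zeta_probs_gmm = gmm.predict_proba(features_tr.values)
zeta_gmm = np.argmax(zeta_probs_gmm, axis=1)
```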
###Code
from phenotyping import ClusteringPhenotyper
from sklearn.metrics import auc
clustering_method = 'gmm'
dim_red_method = None # We do not perform dimensionality reduction for the synthetic dataset
n_components = None
n_clusters = 2 # Number of underlying treatment effect phenotypes
# Running the phenotyper
phenotyper = ClusteringPhenotyper(clustering_method=clustering_method,
dim_red_method=dim_red_method,
n_components=n_components,
n_clusters=n_clusters)
zeta_probs_train = phenotyper.fit_phenotype(features_tr.values)
zeta_train = np.argmax(zeta_probs_train, axis=1)
print(f'Distribution of individuals in each treatment phenotype in the training data: \
{np.unique(zeta_train, return_counts=True)[1]}')
max_treat_idx_CP = find_max_treatment_effect_phenotype(
g=2, zeta_probs=zeta_probs_train, factual_outcomes=(outcomes_tr, interventions_tr))
print(f'\nGroup {max_treat_idx_CP} has the maximum restricted mean survival time on the training data!')
###Output
No Dimensionaity reduction specified...
Proceeding to learn clusters with the raw features...
Fitting the following Clustering Model:
GaussianMixture(covariance_type='diag', n_components=3)
Distribution of individuals in each treatment phenotype in the training data: [ 612 1408 1879]
Group 2 has the maximum restricted mean survival time on the training data!
###Markdown
3.4 Evaluate Clustering Phenotyper on Test Data
###Code
# Now for each individual in the test data, let's find the probability that
# they belong to the max treatment effect group
# Use the phenotyper trained on training data to phenotype on testing data
zeta_probs_test_CP = phenotyper.phenotype(x_te)
zeta_test_CP = np.argmax(zeta_probs_test_CP, axis=1)
print(f'Distribution of individuals in each treatment phenotype in the test data: \
{np.unique(zeta_test_CP, return_counts=True)[1]}')
# Now let us evaluate our performance
plot_phenotypes_roc(outcomes_te, zeta_probs_test_CP[:, max_treat_idx_CP])
###Output
Distribution of individuals in each treatment phenotype in the test data: [151 445 505]
###Markdown
4. CMHE for Factual Regression For completeness, we further evaluate the performance of CMHE in estimating factual risk over multiple time horizons using the standard survival analysis metrics, including: 1. $\textbf{Brier Score} \ (\textrm{BS})$: Defined as the Mean Squared Error (MSE) around the probabilistic prediction at a certain time horizon.\begin{align}\text{BS}(t) = \mathop{\mathbf{E}}_{x\sim\mathcal{D}}\big[ ||\mathbf{1}\{ T > t \} - \widehat{\mathbf{P}}(T>t|X)||_{_\textbf{2}}^\textbf{2} \big]\end{align}2. $\textbf{Time Dependent Concordance Index} \ (C^{\text{td}})$: A rank order statistic that computes model performance in ranking patients based on their estimated risk at a specific time horizon.\begin{align}C^{\text{td}}(t) = \mathbf{P}\big( \hat{F}(t| \mathbf{x}_i) > \hat{F}(t| \mathbf{x}_j) | \delta_i=1, T_i<T_j, T_i \leq t \big) \end{align}We compute the censoring-adjusted estimates of the Time Dependent Concordance Index (Antolini et al., 2005; Gerds et al., 2013) and the Integrated Brier Score (i.e. the Brier Score integrated over the 1, 3 and 5 year horizons, $\text{IBS} = \mathop{\sum}_t \frac{t}{t_\text{max}} \cdot \text{BS}(t)$) (Gerds and Schumacher, 2006; Graf et al., 1999) to assess both discriminative performance and model calibration at multiple time horizons.*We find that CMHE had similar or better discriminative performance than a simple Cox Model with MLP hazard functions. CMHE was also better calibrated, as evidenced by an overall lower Integrated Brier Score, suggesting utility for factual risk estimation.* 4.1 Evaluate CMHE on Test Data
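To make the definitions concrete, here is a naive, censoring-unadjusted sketch of $\text{BS}(t)$ and of the integrated score with the weighting stated above. The `factual_evaluate` helper used below additionally applies the censoring adjustment, so the numbers it reports will differ:

```python
import numpy as np

def brier_score_naive(t, times, surv_prob_at_t):
    """Censoring-unadjusted BS(t): mean squared error between the
    indicator 1{T > t} and the predicted survival probability S(t|x)."""
    observed_beyond_t = (times > t).astype(float)
    return np.mean((observed_beyond_t - surv_prob_at_t) ** 2)

def integrated_brier_score_naive(horizons, times, surv_probs):
    """IBS with the weighting above: sum_t (t / t_max) * BS(t),
    where surv_probs has one column per evaluation horizon."""
    horizons = np.asarray(horizons, dtype=float)
    bs = np.array([brier_score_naive(t, times, surv_probs[:, i])
                   for i, t in enumerate(horizons)])
    return float(np.sum(horizons / horizons.max() * bs))
```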
###Code
horizons = [1, 3, 5]
# Now let us predict survival using CMHE
predictions_test_CMHE = model.predict_survival(x_te, a_te, t=horizons)
CI1, CI3, CI5, IBS = factual_evaluate((x_tr, t_tr, e_tr, a_tr), (x_te, t_te, e_te, a_te),
horizons, predictions_test_CMHE)
print(f'Concordance Index (1 Year): {np.around(CI1, 4)} (3 Year) {np.around(CI3, 4)}: (5 Year): {np.around(CI5, 4)}')
print(f'Integrated Brier Score: {np.around(IBS, 4)}')
###Output
Concordance Index (1 Year): 0.6876 (3 Year) 0.6947: (5 Year): 0.6968
Integrated Brier Score: 0.1514
###Markdown
4.2 Comparison with Deep Cox-proportional Hazards Model
###Code
from auton_survival.estimators import SurvivalModel
# Now let us train a Deep Cox-proportional Hazard model with two linear layers and tanh activations
random_seed = 0
hyperparams = {'layers':[[50, 50]],
'lr':[1e-3],
'bs':[128],
'activation':['tanh']}
dcph_model = SurvivalModel(model='dcph', random_seed=0, hyperparams=hyperparams)
interventions_tr.name, interventions_te.name = 'treat', 'treat'
features_tr_dcph = pd.concat([features_tr, interventions_tr], axis=1)
features_te_dcph = pd.concat([features_te, interventions_te], axis=1)
# Train the DCPH model
dcph_model = dcph_model.fit(features_tr_dcph, outcomes_tr)
###Output
0: [0s / 0s], train_loss: 3.4532, val_loss: 3.4632
1: [0s / 0s], train_loss: 3.3954, val_loss: 3.4371
2: [0s / 0s], train_loss: 3.3650, val_loss: 3.4152
3: [0s / 0s], train_loss: 3.3477, val_loss: 3.4047
4: [0s / 0s], train_loss: 3.3262, val_loss: 3.3957
5: [0s / 1s], train_loss: 3.3250, val_loss: 3.3977
6: [0s / 1s], train_loss: 3.3166, val_loss: 3.3921
7: [0s / 1s], train_loss: 3.3198, val_loss: 3.3872
8: [0s / 1s], train_loss: 3.3122, val_loss: 3.3956
9: [0s / 1s], train_loss: 3.3094, val_loss: 3.3870
10: [0s / 1s], train_loss: 3.3096, val_loss: 3.3898
11: [0s / 1s], train_loss: 3.3086, val_loss: 3.3829
12: [0s / 2s], train_loss: 3.3122, val_loss: 3.3848
13: [0s / 2s], train_loss: 3.3011, val_loss: 3.3826
14: [0s / 2s], train_loss: 3.3027, val_loss: 3.3834
15: [0s / 2s], train_loss: 3.2962, val_loss: 3.3813
16: [0s / 2s], train_loss: 3.2998, val_loss: 3.3841
17: [0s / 2s], train_loss: 3.2925, val_loss: 3.3838
18: [0s / 2s], train_loss: 3.2986, val_loss: 3.3823
19: [0s / 2s], train_loss: 3.2884, val_loss: 3.3801
20: [0s / 2s], train_loss: 3.2876, val_loss: 3.3791
21: [0s / 3s], train_loss: 3.2769, val_loss: 3.3832
22: [0s / 3s], train_loss: 3.2780, val_loss: 3.3746
23: [0s / 3s], train_loss: 3.2781, val_loss: 3.3786
24: [0s / 3s], train_loss: 3.2726, val_loss: 3.3837
25: [0s / 3s], train_loss: 3.2717, val_loss: 3.3707
26: [0s / 3s], train_loss: 3.2811, val_loss: 3.3745
27: [0s / 3s], train_loss: 3.2788, val_loss: 3.3713
28: [0s / 4s], train_loss: 3.2759, val_loss: 3.3757
29: [0s / 4s], train_loss: 3.2737, val_loss: 3.3727
30: [0s / 4s], train_loss: 3.2634, val_loss: 3.3662
31: [0s / 4s], train_loss: 3.2579, val_loss: 3.3700
32: [0s / 4s], train_loss: 3.2546, val_loss: 3.3659
33: [0s / 4s], train_loss: 3.2525, val_loss: 3.3711
34: [0s / 4s], train_loss: 3.2624, val_loss: 3.3742
35: [0s / 4s], train_loss: 3.2626, val_loss: 3.3682
36: [0s / 5s], train_loss: 3.2522, val_loss: 3.3710
37: [0s / 5s], train_loss: 3.2538, val_loss: 3.3656
38: [0s / 5s], train_loss: 3.2633, val_loss: 3.3629
39: [0s / 5s], train_loss: 3.2537, val_loss: 3.3658
40: [0s / 5s], train_loss: 3.2494, val_loss: 3.3608
41: [0s / 5s], train_loss: 3.2437, val_loss: 3.3665
42: [0s / 5s], train_loss: 3.2431, val_loss: 3.3643
43: [0s / 5s], train_loss: 3.2516, val_loss: 3.3680
44: [0s / 6s], train_loss: 3.2487, val_loss: 3.3597
45: [0s / 6s], train_loss: 3.2450, val_loss: 3.3704
46: [0s / 6s], train_loss: 3.2503, val_loss: 3.3636
47: [0s / 6s], train_loss: 3.2525, val_loss: 3.3664
48: [0s / 6s], train_loss: 3.2386, val_loss: 3.3634
49: [0s / 6s], train_loss: 3.2405, val_loss: 3.3588
###Markdown
4.3 Evaluate DCPH on Test Data
###Code
# Find suvival scores in the test data
predictions_test_DCPH = dcph_model.predict_survival(features_te_dcph, horizons)
CI1, CI3, CI5, IBS = factual_evaluate((x_tr, t_tr, e_tr, a_tr), (x_te, t_te, e_te, a_te),
horizons, predictions_test_DCPH)
print(f'Concordance Index (1 Year): {np.around(CI1, 4)} (3 Year) {np.around(CI3, 4)}: (5 Year): {np.around(CI5, 4)}')
print(f'Integrated Brier Score: {np.around(IBS, 4)}')
###Output
Concordance Index (1 Year): 0.6908 (3 Year) 0.6926: (5 Year): 0.6946
Integrated Brier Score: 0.1531
|
_notebooks/2020-11-29-pca.ipynb | ###Markdown
"Top og bund i dansk politik"> "Dansk politik er meget mere end et valg mellem højre og venstre. En analyse af hvordan danske politikere har stemt i folketinget 2019-2020 viser, at kampen mellem 'toppen' og 'bunden', dvs. mellem et politisk establishment bestående af de store midterpartier, og et anti-establishment bestående af yderfløjspartierne, er næste lige så stor som kampen mellem højre- og venstrefløj."- toc: true- branch: master- badges: true- image: images/ft151.png- comments: true- hide: false- search_exclude: true- author: Robin Engelhardt- categories: [notebook, pca, folketinget, politik]- show-tags: true Vi har for vane at opdele politik i højre- og venstrefløj. Den historiske årsag er oplysende: det var sådan medlemmerne af den franske nationalforsamling placerede sig efter den franske revolution i 1789. Dem til højre i salen var loyale over for konge og kirke, dem til venstre støttede revolutionen. På den måde undgik man de værste albuehug og [slåskampe](https://www.youtube.com/watch?v=GSXQ1ZgH7NQ) mens man skændtes om Frankrings fremtid.Hvordan burde de danske medlemmer af folketinget sidde i dag, hvis vi ville minimere risikoen for den slags håndgemæng? Spurgt på en anden måde: hvordan placerer man danske politiker i et lokale så deres politiske uenighed afspejles bedst muligt af deres indbyrdes afstande på en to-dimensionel flade? Der findes faktisk en simpel matematisk metode til at finde ud af det på. Den kaldes en 'principal component analyse' (pca), og bliver brugt flittigt i maskinlæring og til at lave undersøgende dataanalyse. Gevinsten ved at bruge en pca på folketingets afstemninger er, at vi kan se om de politiske partier vitterlig stemmer i forhold til det vil forestiller os som en højre- og venstrefløj i dansk politik. Vi vil med andre ord kunne svare på om det virkelig er rigtig, at Enhedslisten og Dansk Folkeparti er længst fra hinanden. Vi vil også kunne finde ud af, hvordan partierne grupperer sig i forhold til andre akser, hvor afstandene måske er lige så store. Da der jo er lige så mange politiske synspunkter i folketinget som der er folketingsmedlemmer, er pca'en en rigtig god måde til at reducere de mange uenigheder ned til de to eller tre mest betydningsfulde uenigheds-typer der kendetegner dansk politik. En analyse på tværs af årene vil desuden kunne vise, hvordan partierne har bevæget sig i forhold til hinanden i løbet af årene.Vi starter med at se på det danske folketing anno 2020:  &160; Enhedslisten: 13 mandater &160; Socialistisk Folkeparti: 15 mandater &160; Siumut&160;: 1 mandat &160; Socialdemokratiet: 48 mandater &160; Alternativet: 1 mandat &160; Uden for folketingsgrupperne: 4 mandater &160; Radikale Venstre: 16 mandater &160; Sambandsflokkurin: 1 mandat &160; Javnaðarflokkurin: 1 mandat &160; Inuit Ataqatigiit: 1 mandat &160; Venstre: 42 mandater &160; Det Konservative Folkeparti: 13 mandater &160; Liberal Alliance: 3 mandater &160; Dansk Folkeparti: 16 mandater &160; Nye Borgerlige: 4 mandater Der er ti danske partier og fire oversøiske. Og så er der fire (pr. 4. dec 5) mandater uden for folketingsgrupperne som udgør løsgængerne fra de Frie Grønne samt Simon Emil Ammitzbøll-Bille. Alle data finder vi via den frit tilgængelige database på oda.ft.dk.
###Code
#collapse-hide
import sys
import pyodbc
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import xml.etree.ElementTree as ET
from collections import Counter
from sklearn.decomposition import PCA
from adjustText import adjust_text
import seaborn as sns
sns.set_style('white')
sns.set_context('notebook')
plt.rcParams["font.family"] = "sans-serif"
PLOTS_DIR = 'images'
%matplotlib inline
###Output
_____no_output_____
###Markdown
We start by connecting to the SQL server and writing a query against the oda database, after which the result is imported into a pandas dataframe. The period we are interested in is 2019-2020, which has id = 151.
###Code
#collapse-hide
periodid = 151
conn = pyodbc.connect('Driver={SQL Server};'
'Server=HUM1006903\SQLEXPRESS;'
'Database=oda_20201103;'
#'Database=oda;'
'Trusted_Connection=yes;')
cursor = conn.cursor()
sql_query = pd.read_sql_query('SELECT \
oda.dbo.Afstemning.id AS afstemning, \
oda.dbo.Sag.periodeid, \
oda.dbo.Periode.titel AS periode, \
oda.dbo.Sag.titel AS titel, \
oda.dbo.Sag.resume, \
oda.dbo.Afstemning.konklusion, \
oda.dbo.Sagstrin.dato, \
oda.dbo.Stemme.typeid AS stemme, \
oda.dbo.Aktør.fornavn, \
oda.dbo.Aktør.efternavn, \
oda.dbo.Aktør.biografi \
FROM oda.dbo.Afstemning \
JOIN oda.dbo.Stemme ON oda.dbo.Afstemning.id = oda.dbo.Stemme.afstemningid \
JOIN oda.dbo.Aktør ON oda.dbo.Aktør.id = oda.dbo.Stemme.aktørid \
JOIN oda.dbo.Sagstrin ON oda.dbo.Sagstrin.id = oda.dbo.Afstemning.sagstrinid \
JOIN oda.dbo.Sag ON oda.dbo.Sag.id = oda.dbo.Sagstrin.sagid \
JOIN oda.dbo.Periode ON oda.dbo.Sag.periodeid = oda.dbo.Periode.id \
WHERE oda.dbo.Sag.periodeid='+str(periodid)+';', conn)
sql_query.head()
#hide
period_txt = str(sql_query['periodeid'].unique()[0]) + ': ' + sql_query.periode.unique()[0]
period_txt
#show
print(len(sql_query.afstemning.unique()))
print(len(sql_query))
###Output
323
57817
###Markdown
As can be seen, there are only 323 roll-call votes and just under 58,000 rows in the dataframe. The corona crisis in 2020 meant that votes between 28 May 2020 and the end of October were taken by a show of hands and are therefore not included in the database. In what follows, the data has to be cleaned and formatted. We can see that the table does not contain any column indicating which party each member of the Folketing belongs to; it is simply not in the database. The only place I can find a member's party affiliation in the data is the "biografi" column, which consists of a mass of XML tags. To extract the party name, which sits between the "party" tags, I use the xml.etree library:
###Code
#collapse-hide
party = []
for bio_string in sql_query['biografi'].values:
try:
root = ET.fromstring(bio_string)
for child in root.findall("./party"):
party.append(child.text)
except Exception as e:
party.append(None)
continue
sql_query['party'] = party
#sql_query.party.unique()
#hide
# her tjekker vi lige at alle har et tilknyttet parti:
for index, row in sql_query.iterrows():
if row.party == None:
print(index, row['afstemning_id'], row['fornavn'], row['efternavn'], row['stemme'])
###Output
_____no_output_____
###Markdown
The next step in cleaning the data is to recode the individual votes so that their numerical values are normalised and can be used in the analysis. The Folketing's database codes them so that 1 means a vote FOR, 2 means AGAINST, 3 means ABSENT, and 4 means "neither for nor against". Instead I code them so that 1 means FOR, -1 means AGAINST, and 0 means "neither for nor against". I also join first and last names and keep only the columns we need:
###Code
#collapse-hide
sql_query['navn'] = sql_query[['fornavn', 'efternavn']].agg(' '.join, axis=1)
#collapse-hide
df = sql_query[['afstemning', 'titel', 'resume', 'konklusion', 'navn', 'party', 'stemme']]
df['stemme'].replace(to_replace=2, value=-1, inplace=True)
df['stemme'].replace(to_replace=4, value=0, inplace=True)
df.tail()
###Output
_____no_output_____
###Markdown
Let us just see how many votes there are in each category:
###Code
#show
Counter(df.stemme)
###Output
_____no_output_____
###Markdown
Unfortunately it turns out that there are a great many absences in the Folketing (many 3s), and missing votes make it difficult to perform a proper PCA, because the usual PCA algorithms cannot handle NaNs. Since attendance in the Folketing is compulsory and all members MUST vote in every division, this is a strange state of affairs. But it turns out that the parliamentary parties make use of so-called 'clearing agreements', which are private agreements between the parliamentary groups. The agreements ensure that a number of members from each party group can get 'time off' from the votes in the chamber without changing which parties hold a majority in the Folketing, and without violating the requirement that at least 90 members must be present for the Folketing to have a quorum. The clearing agreements thus make it possible for political activity to continue even when there are sittings and votes in the chamber. They allow members to take part in, say, political meetings or other activities away from Christiansborg without the votes in the chamber getting an unintended outcome as a result. In practice the agreements are typically made pairwise between party groups for a whole parliamentary session, fixing how many members each group can 'clear' with the other, that is, allow to stay away from the votes because the political counterpart also asks a number of its members to stay away. When a member is cleared, that member does not vote in the chamber that day. How the clearings are distributed among the members of a group can vary from day to day, depending on who needs to be excused from the votes in the chamber. It is typically the group secretary of each parliamentary group who coordinates the distribution of the clearings and makes sure the agreed number of members can be fielded for any votes. It is also typically the group secretary who contacts their counterpart in the other groups if the group suddenly loses people to illness or the like, and agrees on the necessary additional clearings for the relevant period. The typical pattern is that the governing parties clear members with their immediate opposite numbers; that is the easiest way to ensure the political balance is preserved despite the clearings. After a conversation with party secretary Annette Lind (S), who in the current parliamentary session coordinates the clearing agreements for all the parties together with Erling Bonnesen (V), I understand that when a person is cleared, that person will ALWAYS be counted as voting along the party line. That means I can write a function that replaces every absence with the mode of the party's votes for the given division. If absence is the most frequent behaviour, I instead choose the second most frequent vote type. And if all members of a party were absent for a division (which happens frequently for the Greenlandic and Faroese seats), I set them to "neither for nor against", i.e. to 0. If a member is "Uden for folketingsgrupperne" (outside the parliamentary groups), I change ABSENT to "neither for nor against", since I could not get confirmation of whether the independents also make use of clearing agreements.
###Code
#collapse-hide
def get_most_frequent_vote(afstemning, party):
df_ap = df[(df.afstemning == afstemning) & (df.party == party)]
party_votes = df_ap.stemme.values
cnt = Counter(party_votes)
mostfrequent_vote = cnt.most_common()[0][0]
if mostfrequent_vote == 3:
try:
mostfrequent_vote = cnt.most_common()[1][0] # set the second most frequent vote as the most frequent one.
except: # if all members of the party have been absent
mostfrequent_vote = 0 # set their most frequent vote to be "abstain"
return mostfrequent_vote
for i, row in df.iterrows():
if row['stemme'] == 3:
if row['party'] == 'Uden for folketingsgrupperne':
df.at[i,'stemme'] = 0
else:
ifor_val = get_most_frequent_vote(row['afstemning'], row['party'])
df.at[i,'stemme'] = ifor_val
Counter(df.stemme)
###Output
_____no_output_____
###Markdown
To get the data into the right format, we next need to reorganise the table so that the rows are the individual members, the columns are the individual divisions, and the cells contain the votes. We can use the pivot function in Python:
###Code
#collapse-hide
dp = df.pivot_table(index = ['navn', 'party'], columns=['afstemning'], values=['stemme'])
dp.dropna(inplace=True)
dp.columns = [col[1] for col in dp.columns] # get rid of the extra multicolumn "vote"
dp = dp.reset_index(level=['navn', 'party']) # make the multiindex [name, party] into two columns
dp.head()
###Output
_____no_output_____
###Markdown
Next we give the parties a colour, so we can tell them apart in the resulting plot:
###Code
#collapse-hide
# compute the color that each MP should be, based on their party. color codes are taken from https://www.dr.dk/om-dr/designmanager/temapakker/folketingsvalg-2019
color_dict = {
'Enhedslisten' : '#E6801A',
'Socialistisk Folkeparti' : '#E07EA8',
'Sambandsflokkurin' : '#41b6c4',
'Javnaðarflokkurin' : '#67001f',
'Socialdemokratiet' : '#A82721',
'Siumut' : '#ef3b2c',
'Radikale Venstre' : '#733280',
'Inuit Ataqatigiit' : '#980043',
'Det Konservative Folkeparti' : '#96B226',
'Liberal Alliance' : '#3FB2BE',
'Venstre' : '#254264',
'Dansk Folkeparti': '#EAC73E',
'Uden for folketingsgrupperne' : '#737373',
'nan' : 'black',
'Alternativet' : '#2B8738',
'Nye Borgerlige' : '#127B7F',
'Kristendemokraterne' : '#8B8474',
'Klaus Riskær Pedersen' : '#6C8BB8',
'Stram Kurs' : '#998F4D',
'Nunatta Qitornai' : '#c51b8a',
'Tjóðveldi' : '#a6d96a'
}
def party_color(x):
return color_dict.get(str(x),'black')
colors = [party_color(x) for x in dp['party']]
###Output
_____no_output_____
###Markdown
The original data has one dimension per roll-call vote, so each member is a point in a space with as many dimensions as there are divisions, but the PCA reduces this down to three or even just two dimensions. In return, those two or three dimensions are the ones that show the greatest variance in the members' voting behaviour, and they can therefore be used as indicators of how much the members disagree with one another. The dimensions are, moreover, orthogonal to each other, which means they are independent of each other. Any correlation between the divisions is thereby transformed into linearly uncorrelated variables called 'components'. PCA is thus an 'unsupervised' method that computes the distance between the parties as a complex blend of how the 323 divisions recorded in the 2019-2020 parliamentary session were voted. For the dimensionality reduction to give a reasonably accurate picture of the differences, however, it is important that the 2-3 principal components capture the majority of the variance in the data. Below we set the number of components to 3 and call the first component xvector, the second yvector, and the third zvector.
###Code
#collapse-hide
num_folketingsmedlemmer = len(dp)
num_bills = len(dp.columns)-2
bills = dp.columns[2:num_bills+2]
dat = dp.iloc[:,2:num_bills+2]
pca = PCA(n_components=3)
pca.fit(dat)
xvector = pca.components_[0]
yvector = pca.components_[1]
zvector = pca.components_[2]
xs = pca.transform(dat)[:,0]
ys = pca.transform(dat)[:,1]
zs = pca.transform(dat)[:,2]
pca.explained_variance_, pca.explained_variance_ratio_
###Output
_____no_output_____
###Markdown
As can be seen, the first component captures 44.4 percent of the variance in the data. That is not as much as hoped (in earlier parliamentary years it typically reaches 60-70%), but it is probably due to the unusual corona year we have had. We will have to live with it, and can now plot the result for the first two components:
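Before plotting, a quick generic check of how much of the total variance the first few components capture (a small sketch that reuses the `pca` object fitted above):

```python
import numpy as np

cumulative = np.cumsum(pca.explained_variance_ratio_)
for i, frac in enumerate(cumulative, start=1):
    print(f'First {i} component(s) explain {frac:.1%} of the variance')
```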
###Code
#collapse-hide
fig, ax = plt.subplots()
ix_high = np.argsort(xvector)[-5:] # returns an array of sorted indexes of the components
ix_low = np.argsort(xvector)[:5]
iy_high = np.argsort(yvector)[-5:] # returns an array of sorted indexes of the components
iy_low = np.argsort(yvector)[:5]
ix_highest_and_lowest_comps = np.append(ix_high, ix_low)
iy_highest_and_lowest_comps = np.append(iy_high, iy_low)
def get_arr_index_colors(color):
# returns an array of indexes in the colors array corresponding to a certain party with color "color"
col_mask = np.where(np.array(colors) == color,True,False)
col_index = np.arange(0, len(colors))[col_mask]
return col_index
for color in np.unique(colors):
ix_color = get_arr_index_colors(color)
ax.scatter(xs[ix_color], ys[ix_color], c = color, label = list(color_dict.keys())[list(color_dict.values()).index(color)])
for i in ix_color:
ax.annotate(dp.iloc[i]['navn'], (xs[i], ys[i]), fontsize=2)
for i in ix_highest_and_lowest_comps:
# arrows project features as vectors onto PC axes
plt.arrow(0, 0, xvector[i]*max(xs)*2, yvector[i]*max(ys)*2,
color='grey', width=0.0005, head_width=0.005)
texts = [plt.text(xvector[i]*max(xs)*2.2, yvector[i]*max(ys)*2.2,
list(dat.columns.values)[i], color='black', fontsize=3)]
for i in iy_highest_and_lowest_comps:
# arrows project features as vectors onto PC axes
plt.arrow(0, 0, xvector[i]*max(xs)*2, yvector[i]*max(ys)*2,
color='grey', width=0.0005, head_width=0.005)
plt.text(xvector[i]*max(xs)*2.2, yvector[i]*max(ys)*2.2,
list(dat.columns.values)[i], color='black', fontsize=3)
plt.scatter(0,0, color='white', s=4, zorder=20)
adjust_text(texts)
lgd = ax.legend(title=str(num_bills)+' afstemninger', prop={'size': 10}, bbox_to_anchor=(1.05, 1))
ax.set_title('Folketingsperiode ' + period_txt, fontsize=14)
# invert the x-axis so that the "left wing" goes to the left and the "right wing" to the right. First grab a reference to the current axes and then set the xlimits to be the reverse of the current xlimits
ax = plt.gca()
ax.set_xlim(ax.get_xlim()[::-1]) # flip the axis so it matches the visual convention that the left wing is on the left and the right wing on the right
ax.set_ylim(ax.get_ylim()[::-1])
#ax.set_xlim([25,-15])
#ax.set_ylim([12.5,-12.5])
#plt.tight_layout()
# Remember: save as pdf and transparent=True for Adobe Illustrator
if not os.path.exists(PLOTS_DIR):
os.makedirs(PLOTS_DIR)
plt.savefig(os.path.join(PLOTS_DIR, 'ft'+str(periodid)+'.png'), bbox_extra_artists=(lgd,), bbox_inches='tight', transparent=True, dpi=800)
plt.savefig(os.path.join(PLOTS_DIR, 'ft'+str(periodid)+'.pdf'), bbox_extra_artists=(lgd,), bbox_inches='tight', transparent=True, dpi=800)
plt.close()
###Output
_____no_output_____
###Markdown
 The x-axis gives quite a good picture of what we normally think of as the ideological left-right axis in Danish politics. Enhedslisten sits furthest to the left and Dansk Folkeparti furthest to the right (at least in the years before the general election in the autumn of 2018), and at the top centre sits Socialdemokratiet. There are a couple of surprises, though. Radikale Venstre lies clearly to the left of Socialdemokratiet, in fact closer to SF than to S. The most 'right-wing' parties in Danish politics are neither Dansk Folkeparti nor Nye Borgerlige; they are Venstre and Det Konservative Folkeparti, two parties that are otherwise practically indistinguishable in how they vote. The overseas seats drift around in the middle, mainly because they often abstain or vote "neither for nor against". The arrows radiating from (0,0) are the five largest eigenvectors of the covariance matrix for each component and represent the core of a PCA: they show the direction of the divisions that matter most for the first two components. For example, one vector points in the 7:30 direction, and it tells us that division 7393 (a bill on animal welfare) was one of the divisions that most separated the left-wing and 'bottom' parties from the rest (the bill was rejected: 41 voted for, namely DF, RV, SF, EL and ALT, and 62 voted against, namely S, V, KF, NB and LA).
###Code
#hide
#We can also try looking at the independents' voting behaviour:
#for lg in dp[dp.party == 'Uden for folketingsgrupperne'].navn.unique():
#print(lg, Counter(sql_query[sql_query.navn == lg].stemme))
#which shows that Uffe Elbæk and Simon Emil Ammitzbøll-Bille (both in the middle of the x-axis) were absent from 71% and 84% respectively of all recorded divisions in the chamber (since attendance and voting are compulsory, and the two are unlikely to have clearing agreements, this may be a matter worth looking into). Sikandar Siddique (on the left-hand side of the x-axis, close to SF), by contrast, takes part in the votes diligently.
###Output
_____no_output_____
###Markdown
Let us take a slightly closer look at this first component of the PCA, which projects the largest differences between the Danish politicians onto a single axis and accounts for roughly 45% of the variance in the data. We start by plotting a heat map to see which divisions mattered most and least for this spread.
###Code
#collapse-hide
# since there are 323 divisions we append an extra element so that we have 12x27 elements,
# which we can then plot with matshow. I use a colour code that marks the divisions
# that are important for the right wing in blue, and those that are important for the
# left wing in red.
b = np.append(bills, [0])
b = b.reshape(12,27)
w = np.append(xvector, [0])
w = w.reshape(12,27)
fig, ax = plt.subplots(figsize=(12,8))
mesh = ax.matshow(w, cmap='seismic')
for (i, j), z in np.ndenumerate(b):
ax.text(j, i, '{}'.format(z), ha='center', va='center', fontsize=4, bbox=dict(boxstyle='round', facecolor='white', edgecolor='0.3'))
plt.colorbar(mesh, ax=ax, fraction=0.02, pad=0.04) # arguments shrink the colorbar
# Remember: save as pdf and transparent=True for Adobe Illustrator
if not os.path.exists(PLOTS_DIR):
os.makedirs(PLOTS_DIR)
plt.savefig(os.path.join(PLOTS_DIR, 'ft'+str(periodid)+'_heatmap_1.png'), transparent=True, dpi=300)
# plt.savefig(os.path.join(PLOTS_DIR, 'ft'+str(periodid)+'_heatmap_1.pdf'), transparent=True, dpi=800)
plt.close()
###Output
_____no_output_____
###Markdown
 The plot shows in red the divisions that (if passed) pulled Denmark to the left, and in blue the divisions that (if passed) pulled Denmark to the right. It is clear that there are large differences in how much a given division matters for this first component. We can try printing the summary of the five most important divisions in each direction (right and left) and see whether they also represent political themes that we typically associate with a right and a left wing:
###Code
#hide
import pprint
#collapse-hide
print('Her de mest polariserende afstemninger som de røde (dvs. venstrefløjen + S) stemte for og vandt:\n')
for lov in bills[ix_high]:
print('afstemnings-id', lov)
pprint.pprint(df[df.afstemning == lov].titel.unique()[0])
pprint.pprint(df[df.afstemning == lov].konklusion.unique()[0])
pprint.pprint(df[df.afstemning == lov].resume.unique()[0])
print('\n\n')
#collapse-hide
print('Her de mest polariserende afstemninger som de blå (dvs. højrefløjen) stemte for og tabte:\n')
for lov in bills[ix_low]:
print('afstemnings-id', lov)
pprint.pprint(df[df.afstemning == lov].titel.unique()[0])
pprint.pprint(df[df.afstemning == lov].konklusion.unique()[0])
pprint.pprint(df[df.afstemning == lov].resume.unique()[0])
print('\n\n')
###Output
Her de mest polariserende afstemninger som de blå (dvs. højrefløjen) stemte for og tabte:
afstemnings-id 7414
('Forslag til lov om ændring af lov om afgift af tinglysning af ejer- og '
'panterettigheder m.v. (tinglysningsafgiftsloven), emballageafgiftsloven, lov '
'om afgift af bekæmpelsesmidler og forskellige andre love. (Indeksering af de '
'faste tinglysningsafgifter og en række miljøafgifter og genindførelse af '
'registreringsafgiften på luftfartøjer m.v.).')
('Forslaget blev forkastet. For stemte 50 (V, DF, KF, NB og LA), imod stemte '
'59 (S, RV, SF, EL og ALT), hverken for eller imod stemte 0.')
('Med lovforslaget foreslås det at indeksere afgifterne på tinglysning, '
'råstoffer, emballager, bekæmpelsesmidler og spildevand frem til 2025.\n'
'\n'
'Indekseringen foreslås indført ved to satsforhøjelser i perioden 2020-2025. '
'Afgifterne på råstoffer og tinglysning forhøjes i 2020 og 2023, og '
'afgifterne på emballager, bekæmpelsesmidler og spildevand forhøjes i 2021 og '
'2024. Afgifterne foreslås forhøjet med 5,5 pct. pr. gang.\n'
'\n'
'Desuden foreslås en registreringsafgift på luftfartøjer fra og med den 1. '
'januar 2021, således at der indføres en afgift for registrering af '
'ejerrettigheder over fly på 0,1 pct. af flyets værdi og en afgift for '
'registrering af pantrettigheder i fly på 0,1 pct. af pantets værdi. For '
'registrering af pant i fly, der vejer under 5.700 kg, eller som er '
'registreret godkendt til højst 10 passagerer, er afgiften på 1,5 pct. af '
'pantets værdi.\n'
'\n'
'Lovforslaget udmønter dele af aftale om finansloven for 2020 indgået den 2. '
'december 2019 mellem regeringen (Socialdemokratiet), Radikal Venstre, '
'Socialistisk Folkeparti, Enhedslisten og Alternativet.\n')
afstemnings-id 7248
('Forslag til lov om ændring af lov om Arbejdsgivernes Uddannelsesbidrag. '
'(Modelparametre for erhvervsuddannelser til brug for beregning af '
'praktikpladsafhængigt arbejdsgiverbidrag for 2020 og justering af det '
'aktivitetsafhængige VEU-bidrag for 2020 m.v.).')
('Forslaget blev forkastet. For stemte 51 (V, DF, KF, NB og LA), imod stemte '
'61 (S, RV, SF, EL og ALT), hverken for eller imod stemte 0.')
('Forslaget vedrører det praktikpladsafhængige AUB-bidrag, der blev aftalt ved '
'”Trepartsaftale om tilstrækkelig og kvalificeret arbejdskraft i hele Danmark '
'og praktikpladser” fra 2016, og det aktivitetsafhængige VEU-bidrag, der blev '
'aftalt ved ”Trepartsaftalen om styrket og mere fleksibel voksen-, efter- og '
'videreuddannelse (2018-2021)” fra 2017. \n'
'\n'
'I lov om Arbejdsgivernes Uddannelsesbidrag er det forudsat, at der årligt '
'ved lov skal ske en fastsættelse af modelparametrene i det '
'praktikpladsafhængige AUB-bidrag og af bidragssatsen i det '
'aktivitetsafhængige VEU-bidrag. \n'
'\n'
'Forslaget har derfor til formål at fastsætte de årlige modelparametre i det '
'praktikpladsafhængige AUB-bidrag for de enkelte erhvervsuddannelser for 2020 '
'i bilag 1 til lov om Arbejdsgivernes Uddannelsesbidrag og at indføre den '
'årlige tilpasning af det aktivitetsafhængige VEU-bidrag for 2020.')
afstemnings-id 7385
('Forslag til lov om ændring af aktieavancebeskatningsloven og '
'dødsboskatteloven. (Ophævelse af hovedaktionærnedslaget).')
('Forslaget blev forkastet. For stemte 47 (V, DF, KF, NB og LA), imod stemte '
'57 (S, RV, SF, EL og ALT), hverken for eller imod stemte 0.')
('Med lovforslaget foreslås det, at ophæve det særlige nedslag i den '
'skattepligtige fortjeneste, som kan opnås ved afståelse af '
'hovedaktionæraktier, der er erhvervet før den 19. maj 1993.')
afstemnings-id 7237
('Forslag til lov om ændring af pensionsbeskatningsloven, '
'pensionsafkastbeskatningsloven, selskabsskatteloven og forskellige andre '
'love. (Videregivelse af oplysninger om diskvalificerende '
'pensionsudbetalinger, smidiggørelse af regler for flytning af '
'pensionsindbetalinger, justering af reglerne om omdannelse af pensionskasser '
'til livsforsikringsselskaber og goodwillbeskatning m.v.).')
('Forslaget blev forkastet. For stemte 50 (V, DF, KF, NB og LA), imod stemte '
'62 (S, RV, SF, EL og ALT), hverken for eller imod stemte 0.')
('Formålet med forslaget er at foretage en række tekniske justeringer på '
'pensionsbeskatningsområdet. \n'
'\n'
'Forslaget indeholder bl.a. adgang til oplysninger om diskvalificerende '
'udbetalinger for pensionsinstitutter, smidigere regler for flytning af '
'pensionsindbetalinger mellem forskellige pensionsordninger og justeringer af '
'reglerne for skattefri omstruktureringer af pensionsinstitutter. ')
afstemnings-id 7278
'Forslag til finanslov for finansåret 2020.'
('Forslaget blev forkastet. For stemte 50 (V, DF, KF, NB og LA), imod stemte '
'61 (S, RV, SF, EL, ALT og SIU), hverken for eller imod stemte 0.')
('Finanslovforslaget fastlægger størrelsen og fordelingen af de samlede '
'statslige udgifter og indtægter for finansåret 2020. Lovforslaget indeholder '
'desuden overslag over statens udgifter for de efterfølgende 3 år.')
###Markdown
So what does this show? The left-right axis is about issues such as decent working conditions for drivers and hauliers, child maintenance for single providers, abolishing the residence requirement for unemployment benefit, and budget matters. In other words, things we would expect the two wings to disagree about, and which the left wing + S got through because they hold the majority. What about the y-axis, i.e. the bottom versus the top of the figure? For that we need to look at the second component, and we again start by drawing a heat map:
###Code
#collapse-show
# now the same for the second component:
w = np.append(yvector, [0])
w = w.reshape(12,27)
fig, ax = plt.subplots(figsize=(12,8))
mesh = ax.matshow(w, cmap='PiYG')
for (i, j), z in np.ndenumerate(b):
ax.text(j, i, '{}'.format(z), ha='center', va='center', fontsize=4, bbox=dict(boxstyle='round', facecolor='white', edgecolor='0.3'))
plt.colorbar(mesh, ax=ax, fraction=0.02, pad=0.04)
# Remember: save as pdf and transparent=True for Adobe Illustrator
if not os.path.exists(PLOTS_DIR):
os.makedirs(PLOTS_DIR)
plt.savefig(os.path.join(PLOTS_DIR, 'ft'+str(periodid)+'_heatmap_2.png'), transparent=True, dpi=300)
# plt.savefig(os.path.join(PLOTS_DIR, 'ft'+str(periodid)+'_heatmap_2.pdf'), transparent=True, dpi=800)
plt.close()
###Output
_____no_output_____
###Markdown
 We again print the five most important divisions in each direction (i.e. up and down):
###Code
#collapse-hide
iyhigh = np.argsort(yvector)[-5:] # returns an array of sorted indexes of the components
print('Her de mest polariserende afstemninger som de grønne (dvs. bunden) stemte for og tabte:\n')
for lov in bills[iyhigh]:
print('afstemnings-id', lov)
pprint.pprint(df[df.afstemning == lov].titel.unique()[0])
pprint.pprint(df[df.afstemning == lov].konklusion.unique()[0])
pprint.pprint(df[df.afstemning == lov].resume.unique()[0])
print('\n\n')
#collapse-hide
iylow = np.argsort(yvector)[:5]
print('Her de mest polariserende afstemninger som de lilla (dvs. toppen) stemte for og vandt:\n')
for lov in bills[iylow]:
print('afstemnings-id', lov)
pprint.pprint(df[df.afstemning == lov].titel.unique()[0])
pprint.pprint(df[df.afstemning == lov].konklusion.unique()[0])
pprint.pprint(df[df.afstemning == lov].resume.unique()[0])
print('\n\n')
###Output
Her de mest polariserende afstemninger som de lilla (dvs. toppen) stemte for og vandt:
afstemnings-id 7407
('Forslag til lov om ændring af lov om social service. (Ro og stabilitet for '
'udsatte børn og unge og fuldbyrdelse af tvangsmæssige afgørelser om ændret '
'anbringelsessted uden samtykke).')
('Forslaget blev vedtaget. For stemte 73 (S, V, RV og KF), imod stemte 35 (DF, '
'SF, EL, ALT, NB og LA), hverken for eller imod stemte 0.')
('Med lovforslaget ændres lov om social service. Det foreslås blandt andet, at '
'kommunerne får adgang til at fuldbyrde afgørelser om ændret anbringelsessted '
'uden samtykke. Hvis denne fuldbyrdelse sker med bistand fra politiet, skal '
'den registreres og indberettes af kommunen til Ankestyrelsen. Det forslås '
'videre, at fuldbyrdelse af afgørelser skal ske med henblik på at sikre '
'barnets eller den unges bedste.')
afstemnings-id 7363
('Folketinget noterer sig, at der har været politiske drøftelser i '
'forligskredsen med henblik på at ændre offentlighedsloven. Folketinget '
'noterer sig endvidere, at det i forligskredsen ikke har været muligt at nå '
'til enighed om en ændring af offentlighedsloven. Endelig noterer Folketinget '
'sig, at justitsministeren ikke finder behov for at genåbne drøftelserne om '
'at ændre offentlighedsloven.')
('Forslaget blev vedtaget. For stemte 66 (S, V, Liselott Blixt (DF) og KF), '
'imod stemte 44 (DF, RV, SF, EL, ALT, NB og LA), hverken for eller imod '
'stemte 0.')
''
afstemnings-id 7326
('Forslag til lov om ændring af lov om offentlighed i forvaltningen. '
'(Ophævelse af revisionsbestemmelse).')
('Forslaget blev vedtaget. For stemte 68 (S, V, KF og LA), imod stemte 46 (DF, '
'RV, SF, EL, ALT, NB og UFG), hverken for eller imod stemte 0.')
('Det følger af offentlighedslovens § 44, at justitsministeren i '
'folketingsåret 2018-19 skulle fremsætte lovforslag om revision af '
'offentlighedslovens § 16 om postlister. Ved en postliste forstås en '
'fortegnelse over dokumenter, der den pågældende dag er modtaget i eller '
'afsendt af myndigheden. \n'
'\n'
'Som opfølgning på den generelle evaluering af offentlighedsloven har der '
'været forligskredsdrøftelser om en ny aftale om offentlighedsloven, og bl.a. '
'af den grund er revisionen af postlistebestemmelsen blevet udskudt i 2017 og '
'i 2018. \n'
'\n'
'Da det ikke har været muligt at opnå enighed i forligskredsen om at ændre '
'offentlighedsloven, herunder om at indføre en postlisteordning, foreslås '
'det, at revisionsbestemmelsen i offentlighedslovens § 44 ophæves. \n'
'\n'
'Lovforslaget er en genfremsættelse af L 176 (2018-19, 1. samling).\n'
'\n'
'Det foreslås, at loven træder i kraft den 1. januar 2020.\n')
afstemnings-id 7341
('Forslag til lov om ændring af skatteforvaltningsloven. (Afledt skattemæssig '
'virkning ved ekstraordinær genoptagelse af ejendomsvurderinger).')
('Forslaget blev vedtaget. For stemte 78 (S, V, RV, SF, KF, ALT og LA), imod '
'stemte 22 (DF, EL og NB), hverken for eller imod stemte 0.')
('Lovforslaget handler om de dele af lovforslag nr. L 71, der omhandler afledt '
'skattemæssig virkning ved ekstraordinær genoptagelse af ejendomsvurderinger.')
afstemnings-id 7485
('Forslag til lov om ændring af aktiesparekontoloven. (Forhøjelse af loftet '
'for indskud på aktiesparekontoen).')
('Forslaget blev vedtaget. For stemte 71 (S, V, RV, KF, NB og LA), imod stemte '
'25 (DF, SF, EL, ALT og UFG), hverken for eller imod stemte 0.')
('Det foreslås med lovforslaget at forhøje loftet for indskud på '
'aktiesparekontoen fra 50.000 kr. i 2019 til 100.000 kr. i 2020.')
###Markdown
What does this show? The most polarising divisions between top and bottom concern things like the Access to Public Records Act (the bottom wants it changed, the top does not), more animal welfare (the top does not want it), abolishing EU privileges (the top does not want it), equal entitlement to social pension (the top does not want it), removing politicians' perks (the top does not want it), and accommodating citizens' initiatives (the top does not want it). Among the issues that were passed but that the bottom voted against are new rules on compulsory placements of children, immigration, and various economic rules (DF and EL are, however, typically far apart in these cases), and again a resistance to postponing changes to the Access to Public Records Act. All in all, one might say that bottom versus top in Danish politics is about those who want to strip privileges from a particular 'establishment' and those who want to keep the privileges. There are, however, different kinds of 'establishment'. Sometimes it is the political elite versus the rest, sometimes the EU versus the rest of the world, sometimes humans versus animals, and sometimes the rich versus the not-so-rich. That reading also fits well with Aarhus University's power study ["De Folkevalgte"](https://unipress.dk/media/14491/87-7934-794-0_de_folkevalgte.pdf) from 2004, which states that "in contrast to all the other parties, the 'centre parties' (*sic, here meaning S + V + KF*) do not see a need to change the power relations in society." (p. 246). Perhaps one can also say, with [Larry Summers' words](https://aciddc.wordpress.com/2017/05/05/larry-summers-wants-to-know-are-you-an-insider-or-an-outsider/), that the second most important dividing line in Danish politics, after the left-right division, is the one separating 'insiders' from 'outsiders'. Outsiders are the free people who speak up and say whatever they like, but decide nothing at all. Insiders are those who only say what it is accepted to say as an insider, and they do not listen to the outsiders (whom they call extremists). In return, the insiders get to make all the important decisions.
###Code
#hide
#For completeness we can also take a quick look at the third component of the PCA, which in the plot below is shown on the y-axis together with the first component along the x-axis:
fig, ax = plt.subplots()
ix_high = np.argsort(xvector)[-5:] # returns an array of sorted indexes of the components
ix_low = np.argsort(xvector)[:5]
iz_high = np.argsort(zvector)[-5:] # returns an array of sorted indexes of the components
iz_low = np.argsort(zvector)[:5]
ix_highest_and_lowest_comps = np.append(ix_high, ix_low)
iz_highest_and_lowest_comps = np.append(iz_high, iz_low)
def get_arr_index_colors(color):
# returns an array of indexes in the colors array corresponding to a certain party with color "color"
col_mask = np.where(np.array(colors) == color,True,False)
col_index = np.arange(0, len(colors))[col_mask]
return col_index
for color in np.unique(colors):
ix_color = get_arr_index_colors(color)
ax.scatter(xs[ix_color], zs[ix_color], c = color, label = list(color_dict.keys())[list(color_dict.values()).index(color)])
for i in ix_color:
ax.annotate(dp.iloc[i]['navn'], (xs[i], zs[i]), fontsize=2)
for i in ix_highest_and_lowest_comps:
# arrows project features as vectors onto PC axes
plt.arrow(0, 0, xvector[i]*max(xs)*2, zvector[i]*max(zs)*2,
color='grey', width=0.0005, head_width=0.005)
texts = [plt.text(xvector[i]*max(xs)*2.2, zvector[i]*max(zs)*2.2,
list(dat.columns.values)[i], color='black', fontsize=3)]
for i in iz_highest_and_lowest_comps:
# arrows project features as vectors onto PC axes
plt.arrow(0, 0, xvector[i]*max(xs)*2, zvector[i]*max(zs)*2,
color='grey', width=0.0005, head_width=0.005)
plt.text(xvector[i]*max(xs)*2.2, zvector[i]*max(zs)*2.2,
list(dat.columns.values)[i], color='black', fontsize=3)
plt.scatter(0,0, color='white', s=4, zorder=20)
adjust_text(texts)
lgd = ax.legend(title=str(num_bills)+' afstemninger', prop={'size': 10}, bbox_to_anchor=(1.05, 1))
ax.set_title('Folketingsperiode ' + period_txt, fontsize=14)
# invert the x-axis so that the "left wing" goes to the left and the "right wing" to the right. First grab a reference to the current axes and then set the xlimits to be the reverse of the current xlimits
ax = plt.gca()
ax.set_xlim(ax.get_xlim()[::-1]) # flip the axis so it matches the visual convention that the left wing is on the left and the right wing on the right
#ax.set_ylim(ax.get_ylim()[::-1])
#ax.set_xlim([25,-15])
#ax.set_ylim([12.5,-12.5])
#plt.tight_layout()
# Remember: save as pdf and transparent=True for Adobe Illustrator
if not os.path.exists(PLOTS_DIR):
os.makedirs(PLOTS_DIR)
plt.savefig(os.path.join(PLOTS_DIR, 'ft'+str(periodid)+'_xz.png'), bbox_extra_artists=(lgd,), bbox_inches='tight', transparent=True, dpi=800)
plt.savefig(os.path.join(PLOTS_DIR, 'ft'+str(periodid)+'_xz.pdf'), bbox_extra_artists=(lgd,), bbox_inches='tight', transparent=True, dpi=800)
plt.close()
#
#And if one again analyses which divisions matter for this third axis, one can see that they mainly concern budget bills that V and K (and sometimes LA) voted against. So the third-largest dividing line in Danish politics anno 2020 runs between right-wing parties that voted against the 2020 budget (V + K, and partly LA) and right-wing parties that voted for the 2020 budget (DF and NB).
#hide
# nu det samme for anden component:
w = np.append(zvector, [0])
w = w.reshape(12,27)
fig, ax = plt.subplots(figsize=(12,8))
mesh = ax.matshow(w, cmap='BrBG')
for (i, j), z in np.ndenumerate(b):
ax.text(j, i, '{}'.format(z), ha='center', va='center', fontsize=4, bbox=dict(boxstyle='round', facecolor='white', edgecolor='0.3'))
plt.colorbar(mesh, ax=ax, fraction=0.02, pad=0.04)
# Remember: save as pdf and transparent=True for Adobe Illustrator
if not os.path.exists(PLOTS_DIR):
os.makedirs(PLOTS_DIR)
plt.savefig(os.path.join(PLOTS_DIR, 'ft'+str(periodid)+'_heatmap_3.png'), transparent=True, dpi=300)
# plt.savefig(os.path.join(PLOTS_DIR, 'ft'+str(periodid)+'_heatmap_2.pdf'), transparent=True, dpi=800)
plt.close()
#
#hide
#izhigh = np.argsort(zvector)[-5:] # returns an array of sorted indexes of the components
#print('Her de mest polariserende afstemninger som de brune (dvs. bunden) stemte for og vandt:\n')
#for lov in bills[izhigh]:
#print('afstemnings-id', lov)
#pprint.pprint(df[df.afstemning == lov].titel.unique()[0])
#pprint.pprint(df[df.afstemning == lov].konklusion.unique()[0])
#pprint.pprint(df[df.afstemning == lov].resume.unique()[0])
#print('\n\n')
#hide
#izlow = np.argsort(zvector)[:5]
#print('Her de mest polariserende afstemninger som de turkise (dvs. toppen) stemte for og tabte:\n')
#for lov in bills[izlow]:
#print('afstemnings-id', lov)
#pprint.pprint(df[df.afstemning == lov].titel.unique()[0])
#pprint.pprint(df[df.afstemning == lov].konklusion.unique()[0])
#pprint.pprint(df[df.afstemning == lov].resume.unique()[0])
#print('\n\n')
###Output
_____no_output_____ |
notebooks/Methods/test_gap_wwz.ipynb | ###Markdown
Generates a signal with a gap and 20% deleted points
###Code
# Imports assumed by the cells in this notebook (the original import cell is not shown here)
import math
import itertools
import numpy as np
import matplotlib.pyplot as plt
import pyleoclim as pyleo
from statistics import mean
from scipy.signal import find_peaks, peak_prominences, peak_widths
from scipy.optimize import linear_sum_assignment

def generate_signal(gap_length,kind):
freqs=[1/20,1/80]
time=np.arange(2001)
signals=[]
for freq in freqs:
signals.append(np.cos(2*np.pi*freq*time))
signal=sum(signals)
slope = 1e-5
intercept = -1
nonlinear_trend = slope*time**2 + intercept
signal_trend = signal + nonlinear_trend
sig_var = np.var(signal)
noise_var = sig_var / 2 #signal is twice the size of noise
white_noise = np.random.normal(0, np.sqrt(noise_var), size=np.size(signal))
signal_noise = signal_trend + white_noise
nt = np.size(time)
deleted_idx =None
if kind=='even':
deleted_idx = np.arange(nt//2-gap_length//2, nt//2+gap_length//2)
print(deleted_idx)
else:
start = 160
        end = start+gap_length
deleted_idx = np.arange(start,end)
signal_unevenly = np.delete(signal_noise, deleted_idx)
time_unevenly = np.delete(time,deleted_idx)
n_del = math.floor(0.2*np.size(time))
deleted_idx = np.random.choice(range(np.size(time_unevenly)), n_del, replace=False)
signal_unevenly = np.delete(signal_unevenly, deleted_idx)
time_unevenly = np.delete(time_unevenly,deleted_idx)
#print(len(signal_unevenly),len(time_unevenly))
ts= pyleo.Series(time_unevenly,signal_unevenly)
return ts
#gap_length = [100,200,400,600,800]
###Output
_____no_output_____
###Markdown
Preprocessing (Standardizing and Detrending)
###Code
def preprocess(ts):
ts_std = ts.standardize()
ts_detrend = ts_std.detrend(method='emd')
return ts_detrend
###Output
_____no_output_____
###Markdown
Spectral analysis using wwz
###Code
def spectral(ts):
psd_wwz = ts.spectral(method='wwz')
psd_signif = psd_wwz.signif_test(qs=[0.95])
amplitude= None
for p in psd_signif.signif_qs.psd_list:
amplitude = p.amplitude
a,b,c = cost_function(psd_wwz.__dict__,[1/20,1/80],amplitude)
print(a,b,c)
fig, ax = psd_signif.plot(title='wwz analysis')
###Output
_____no_output_____
###Markdown
Cost Function to detect correct number of peaks
###Code
def cost_function(res_psd,actual_freqs,signif_qs_psd_amplitude,dist_tol=0,peak_tol=0):
    # actual_freqs: the true peak frequencies expected in the spectrum
    # dist_tol / peak_tol: tolerances; inaccuracies below these are treated as 0
    '''
    1. Find all peaks in the PSD amplitude.
    2. Match the detected peaks to the actual frequencies (bipartite matching)
       and compute the normalized mean distance between matched pairs.
    3. Rank by correct number of peaks, distance, and height/width ratio.
    '''
correct_num_peaks=True
peaks,h=find_peaks(res_psd['amplitude'],height=0)
height_tol=peak_tol*mean(h['peak_heights'])
prom,_,__=peak_prominences(res_psd['amplitude'],peaks)
prom_thresh=mean(prom)*peak_tol
peaks,props=find_peaks(res_psd['amplitude'],prominence=prom_thresh,height=height_tol)
if len(peaks) < len(actual_freqs):
correct_num_peaks=False
widths=np.array(peak_widths(res_psd['amplitude'],peaks,rel_height=0.99)[0])
    # only consider peaks closest to actual freqs; need to do bipartite matching (using the linear sum assignment func)
#assignment problem between peaks and actual_freqs
#create cost matrix, rows=peaks, cols= actual_freq, cost= dist
temp_combs=np.array(list(itertools.product(res_psd['frequency'][peaks],actual_freqs)))
#print(temp_combs)
dist=lambda x,y:abs(x-y)
optimum = []
l = res_psd['frequency'][peaks]
cost=dist(temp_combs[:,0],temp_combs[:,1]).reshape(-1,len(actual_freqs)) #rows = peak,
row_ind,col_ind=linear_sum_assignment(cost)
dists=np.mean(cost[row_ind,col_ind],dtype=float)
peakidx=row_ind
peak_amplitude = []
detected_amplitude = []
indexes = []
    # From the matched indices, find the PSD frequency closest to each actual frequency
    # and extract the corresponding amplitude
for idx in peakidx:
indexes.append(np.where(res_psd['frequency']==l[idx])[0][0])
x = np.where(res_psd['frequency']==l[idx])[0][0]
peak_amplitude.append(res_psd['amplitude'][x])
# extracting amplitude of 95% series at peak
for idx in indexes:
detected_amplitude.append(signif_qs_psd_amplitude[idx])
peak_heights=props['peak_heights'][peakidx]
flag = True
for i in range(len(peak_amplitude)):
if peak_amplitude[i] < detected_amplitude[i]:
flag = False
break
avg_height_width_ratio=mean([peak_height/widths[i] for i,peak_height in enumerate(peak_heights)])
res = None
if flag ==True and correct_num_peaks==True:
res = 2
    elif (flag == True and correct_num_peaks == False) or (flag == False and correct_num_peaks == True):
res = 1
else:
res = 0
#dist tol is an accuracy tolerance for distance of peak to actual freq
if dists<dist_tol:
dists=0
return (correct_num_peaks,avg_height_width_ratio,res)
###Output
_____no_output_____
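###Markdown
 A tiny illustration (not part of the original notebook, and assuming the scipy/numpy imports added above) of the matching step inside `cost_function`: detected peak frequencies are matched to the true frequencies by minimizing total distance with `scipy.optimize.linear_sum_assignment`. The numbers below are made up for demonstration.
###Code
detected = np.array([0.0124, 0.048, 0.0505])  # hypothetical peak frequencies returned by find_peaks
actual = np.array([1/20, 1/80])               # the true frequencies used in generate_signal
cost = np.abs(detected[:, None] - actual[None, :])  # rows = detected peaks, cols = true freqs
row_ind, col_ind = linear_sum_assignment(cost)
print(list(zip(detected[row_ind], actual[col_ind])))
print('mean matched distance:', cost[row_ind, col_ind].mean())
###Output
_____no_output_____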
###Markdown
Wavelet analysis using wwz
###Code
def wavelet_analysis(ts):
wwz_res=ts.wavelet(method='wwz',settings={})
wwz_signif=wwz_res.signif_test(qs=[0.95])
fig,ax=wwz_signif.plot(title='wwz analysis')
plt.show()
###Output
_____no_output_____
###Markdown
Adding a gap close to the edge. Gap Length = 200
###Code
ts = generate_signal(200,'uneven')
ts.plot()
ts = preprocess(ts)
spectral(ts)
#wavelet_analysis(ts)
###Output
_____no_output_____
###Markdown
Gap Length = 400
###Code
ts = generate_signal(400,'uneven')
ts.plot()
ts = preprocess(ts)
spectral(ts)
#wavelet_analysis(ts)
###Output
_____no_output_____
###Markdown
Gap Length = 600
###Code
ts = generate_signal(600,'uneven')
ts.plot()
ts_segment = preprocess(ts)
spectral(ts_segment)
wavelet_analysis(ts)
ts = generate_signal(800,'uneven')
ts.plot()
ts_segment = preprocess(ts)
spectral(ts_segment)
wavelet_analysis(ts)
ts = generate_signal(1000,'uneven')
ts.plot()
ts_segment = preprocess(ts)
spectral(ts_segment)
wavelet_analysis(ts)
###Output
_____no_output_____ |
notebook/model-train-keras-seq.ipynb | ###Markdown
Import
###Code
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import display
from PIL import Image
from keras.preprocessing.image import ImageDataGenerator, load_img
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K
###Output
_____no_output_____
###Markdown
First steps
###Code
DATA_PATH = 'data'
dog_img_name = os.path.join(DATA_PATH, 'train/dog/1.jpg')
dog_img = plt.imread(dog_img_name)
plt.imshow(dog_img)
plt.show()
cat_img_name = os.path.join(DATA_PATH, 'train/cat/1.jpg')
cat_img = plt.imread(cat_img_name)
plt.imshow(cat_img)
plt.show()
###Output
_____no_output_____
###Markdown
Train preparation
###Code
# dimensions of our images.
img_width, img_height = 150, 150
epochs = 20
batch_size = 16
train_data_dir = os.path.join(DATA_PATH, 'train')
validation_data_dir = os.path.join(DATA_PATH, 'validation')
test_data_dir = os.path.join(DATA_PATH, 'test')
nb_train_samples = len(os.listdir(os.path.join(DATA_PATH, 'train/dog'))) + len(os.listdir(os.path.join(DATA_PATH, 'train/cat')))
nb_validation_samples = len(os.listdir(os.path.join(DATA_PATH, 'validation/dog'))) + len(os.listdir(os.path.join(DATA_PATH, 'validation/cat')))
if K.image_data_format() == 'channels_first':
input_shape = (3, img_width, img_height)
else:
input_shape = (img_width, img_height, 3)
###Output
_____no_output_____
###Markdown
Create neural network
###Code
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Data preparation
###Code
train_datagen = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary')
test_generator = test_datagen.flow_from_directory(
test_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary')
###Output
Found 2500 images belonging to 2 classes.
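###Markdown
 Optional sanity check (not part of the original notebook): pull one augmented batch from the training generator to confirm the tensor shapes and the class-to-index mapping before training.
###Code
x_batch, y_batch = next(train_generator)
print(x_batch.shape, y_batch.shape)   # e.g. (16, 150, 150, 3) and (16,)
print(train_generator.class_indices)  # mapping of class folder names to label indices
###Output
_____no_output_____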
###Markdown
Training
###Code
history = model.fit_generator(
train_generator,
steps_per_epoch=nb_train_samples // batch_size,
epochs=epochs,
validation_data=validation_generator,
validation_steps=nb_validation_samples // batch_size)
###Output
/var/folders/2x/6b0z8df95zx4kcymdzxw3tt40000gp/T/ipykernel_95185/818546861.py:1: UserWarning: `Model.fit_generator` is deprecated and will be removed in a future version. Please use `Model.fit`, which supports generators.
history = model.fit_generator(
###Markdown
Model evaluation
###Code
STEP_SIZE_TEST = test_generator.n // test_generator.batch_size
scores = model.evaluate_generator(test_generator, steps=STEP_SIZE_TEST)
print("\n%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
history.history.keys()
# Plot training & validation accuracy values
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
if not os.path.exists(os.path.join(DATA_PATH, 'model')):
os.makedirs(os.path.join(DATA_PATH, 'model'))
model.save(os.path.join(DATA_PATH, 'model/keras_seq_model'))
###Output
2021-11-14 12:31:39.314781: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
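###Markdown
 A minimal sketch (not part of the original notebook) of how the saved model could be loaded back and used to classify a single image. The image path below is only an assumption about the directory layout used earlier.
###Code
from keras.models import load_model
from keras.preprocessing.image import img_to_array
reloaded = load_model(os.path.join(DATA_PATH, 'model/keras_seq_model'))
# Load one test image at the size the network expects and apply the same 1/255 rescaling.
img = load_img(os.path.join(DATA_PATH, 'test/dog/1.jpg'), target_size=(img_width, img_height))  # assumed path
x = img_to_array(img) / 255.0
x = x.reshape((1,) + x.shape)        # add the batch dimension
prob = reloaded.predict(x)[0][0]     # sigmoid output: probability of the class with index 1
print('P(class 1) =', prob)
###Output
_____no_output_____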
|
Final_project (3).ipynb | ###Markdown
Argentina Covid-19 Data Covid-19 Background Covid-19 is a pandemic that hit the world in 2020. This sickness has infected millions of people in the last few years and has even caused some people to lose their lives. This notebook will analyze the COVID-19 data in Argentina. Author Info Abbey Glavin is an Intelligence Analysis student at James Madison University. The data is downloaded and collected from the [European Centre for Disease Prevention and Control](https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-distribution-covid-19-cases-worldwide) Data Source
###Code
%matplotlib inline
import pandas
###Output
_____no_output_____
###Markdown
Argentina Data
###Code
df = pandas.read_excel('s3://ia241-spring2022-glavin/covid_data.xls')
argentina_data = df.loc[df['countriesAndTerritories'] == 'Argentina']
argentina_data[:10] #top 10 rows
###Output
_____no_output_____
###Markdown
This is an overview of the Covid-19 Data that has been collected from Argentina. This shows the top 10 rows of Argentina's statistics and has been imported from my S3 Bucket. Questions this Project Will Answer 1. Which month did Argentina have the most deaths in 2020? 2. Which month did Argentina have the most cases in 2020? 3. How is the number of cases related to the number of deaths in 2020? 1.
###Code
sum_deaths_per_month = argentina_data.groupby('month').sum()['deaths']
sum_deaths_per_month.plot()
###Output
_____no_output_____
###Markdown
This plot shows that there were the most Covid-19 deaths in October (month 10).
###Code
sum_cases_per_month = argentina_data.groupby('month').sum()['cases']
sum_cases_per_month.plot()
###Output
_____no_output_____
###Markdown
This plot shows that there were the most Covid-19 cases reported in October (month 10) 3.
###Code
argentina_data.plot.scatter(x = 'cases', y = 'deaths', c='month')
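# Added illustration (not in the original notebook): quantify question 3 with the
# Pearson correlation between reported cases and deaths.
print('correlation between cases and deaths:', argentina_data['cases'].corr(argentina_data['deaths']))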
###Output
_____no_output_____ |
notebooks/Manual/Manual_How_to_use_learners.ipynb | ###Markdown
How to use learnersIn CNTK, learners are implementations of gradient-based optimization algorithms. CNTK automatically computes the gradient of your criterion/loss with respect to each learnable parameter but how this gradient is combined with the current parameter value to provide a new parameter value is left to the learner. CNTK provides three ways to define your learner, which we describe in detail in this notebook. You can- Use a built-in learner. Built-in learners are very fast.- Define your learner as a CNTK expression. This is not as fast as the built-in learners but more flexible. - Define your learner as a Python function. This is even more flexible but even less fast. Here's a "hello world" example for learners.
###Code
import cntk as C
import numpy as np
import math
np.set_printoptions(precision=4)
features = C.input_variable(3)
label = C.input_variable(2)
z = C.layers.Sequential([C.layers.Dense(4, activation=C.relu), C.layers.Dense(2)])(features)
lr_schedule_m = C.learning_rate_schedule(0.5, C.UnitType.minibatch)
lr_schedule_s = C.learning_rate_schedule(0.5, C.UnitType.sample)
sgd_learner_m = C.sgd(z.parameters, lr_schedule_m)
sgd_learner_s = C.sgd(z.parameters, lr_schedule_s)
###Output
_____no_output_____
###Markdown
We have created two learners here. When creating a learner we have to specify a learning rate schedule, which can be as simple as specifying a single number (0.5 in this example) or it can be a list of learning rates that specify what the learning rate should be at different points in time. Currently, the best results with deep learning are obtained by having a small number of *phases* where inside each phase the learning rate is fixed and the learning rate decays by a constant factor when moving between phases. We will come back to this point later.The second parameter in the learning rate schedule can be one of two different values:- Per minibatch- Per sampleTo understand the difference and get familiar with the learner properties and methods, let's write a small function that inspects the effect of a learner on the parameters assuming the parameters are all 0 and the gradients are all 1.
###Code
def inspect_update(learner, mbsize, count=1):
# Save current parameter values
old_values = [p.value for p in learner.parameters]
# Set current parameter values to all 0
for p in learner.parameters:
p.value = 0 * p.value
# create all-ones gradients and associate them with the parameters
updates = {p: p.value + 1 for p in learner.parameters}
# do 'count' many updates
for i in range(count):
learner.update(updates, mbsize)
ret_values = [p.value for p in learner.parameters]
# Restore values
for p, o in zip(learner.parameters, old_values):
p.value = o
return ret_values
print('\nunit = minibatch\n', inspect_update(sgd_learner_m, mbsize=2))
###Output
unit = minibatch
[array([[-0.25, -0.25],
[-0.25, -0.25],
[-0.25, -0.25],
[-0.25, -0.25]], dtype=float32), array([-0.25, -0.25], dtype=float32), array([[-0.25, -0.25, -0.25, -0.25],
[-0.25, -0.25, -0.25, -0.25],
[-0.25, -0.25, -0.25, -0.25]], dtype=float32), array([-0.25, -0.25, -0.25, -0.25], dtype=float32)]
###Markdown
With the knowledge that SGD is the update `parameter = old_parameter - learning_rate * gradient`, we can conclude that when the learning rate schedule is per minibatch, the learning rate is divided by the minibatch size. Let's see what happens when the learning rate schedule is per sample.
###Code
print('\nunit = sample\n', inspect_update(sgd_learner_s, mbsize=2))
###Output
unit = sample
[array([[-0.5, -0.5],
[-0.5, -0.5],
[-0.5, -0.5],
[-0.5, -0.5]], dtype=float32), array([-0.5, -0.5], dtype=float32), array([[-0.5, -0.5, -0.5, -0.5],
[-0.5, -0.5, -0.5, -0.5],
[-0.5, -0.5, -0.5, -0.5]], dtype=float32), array([-0.5, -0.5, -0.5, -0.5], dtype=float32)]
###Markdown
In the per sample specification, the learning rate is not divided by the minibatch size. CNTK offers both options because in some setups it is more convenient to work with per sample learning rates than per minibatch learning rates and vice versa. **Key concept**: It is important to understand the ramifications of choosing learning rates per minibatch vs per sample. For example, per minibatch learning rate schedules, typically don't require retuning when you want to change the minibatch size, but per sample schedules do. On the other hand with distributed training it is more accurate to specify the learning rate schedule as per sample rather than per minibatch.Calling update manually on the learner (as `inspect_update` does) is very tedious and not recommended. Besides, you need to compute the gradients separately and pass them to the learner. Instead, using a [**`Trainer`**](https://www.cntk.ai/pythondocs/cntk.train.trainer.htmlmodule-cntk.train.trainer), you don't have to do any of that. The manual update used here is for educational purposes and for the vast majority of use cases CNTK users should avoid performing manual updates. Trainers and LearnersA closely related class to the `Learner` is the `Trainer`. In CNTK a `Trainer` brings together all the ingredients necessary for training models:- the model itself- the loss function (a differentiable function) and the actual metric we care about which is not necessarily differentiable (such as error rate)- the learners- optionally progress writers that log the training progressWhile in the most typical case a `Trainer` has a single learner that handles all the parameters, it is possible to have **multiple learners** each working on a different subset of the parameters. Parameters that are not covered by any learner **will not** be updated. Here is an example that illustrates typical use.
###Code
lr_schedule = C.learning_rate_schedule([0.05]*3 + [0.025]*2 + [0.0125], C.UnitType.minibatch, epoch_size=100)
sgd_learner = C.sgd(z.parameters, lr_schedule)
loss = C.cross_entropy_with_softmax(z, label)
trainer = C.Trainer(z, loss, sgd_learner)
# use the trainer with a minibatch source as in the trainer howto
###Output
_____no_output_____
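###Markdown
 A minimal sketch of driving the trainer directly (using random data purely to exercise the Trainer API; a real run would pull minibatches from a minibatch source as noted in the comment above):
###Code
# Random features of dimension 3 and random one-hot labels of dimension 2 (illustration only).
mb = 32
x = np.random.randn(mb, 3).astype(np.float32)
y = np.eye(2, dtype=np.float32)[np.random.randint(0, 2, size=mb)]
for i in range(5):
    trainer.train_minibatch({features: x, label: y})
    print('minibatch %d: loss = %.4f' % (i, trainer.previous_minibatch_loss_average))
###Output
_____no_output_____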
###Markdown
The trainer will compute the gradients of `loss` with respect to the parameters of `z` and call the sgd_learner's update method as we did manually in the `inspect_update` function earlier. Here we have specified a learning rate schedule that is 0.05 for the first 300 minibatches (3 times the epoch size), then drops to 0.025 for the next 200 minibatches, and it is 0.0125 from then on until the end of training. This kind of functionality is quite common in tuning neural networks and it is the reason why in some papers (such as the [ResNet paper](https://arxiv.org/abs/1512.03385)) we see learning curves like thisWhat is happening in this paper is that the learning rate gets reduced by a factor of 0.1 after 150000 and 300000 updates (cf. section 3.4 of the paper). In the example above the learning drops by a factor of 0.5 between each phase. Right now there is no good guidance on how to choose this factor, but it's typically between 0.1 and 0.9.Apart from specifying a `Trainer` yourself, it is also possible to use the `cntk.Function.train` convenience method. This allows you to specify the learner and the data and it internally creates a trainer that drives the training loop. Other built-in learnersApart from SGD, other built-in learners include - [SGD with momentum](https://cntk.ai/pythondocs/cntk.learners.htmlcntk.learners.momentum_sgd) (`momentum_sgd`)- [SGD with Nesterov momentum](https://cntk.ai/pythondocs/cntk.learners.htmlcntk.learners.nesterov) (`nesterov`) first popularized in deep learning by [this paper](http://proceedings.mlr.press/v28/sutskever13.html)- [Adagrad](https://cntk.ai/pythondocs/cntk.learners.htmlcntk.learners.adagrad) (`adagrad`) first popularized in deep learning by [this paper](https://research.google.com/archive/large_deep_networks_nips2012.html) - [RMSProp](https://cntk.ai/pythondocs/cntk.learners.htmlcntk.learners.rmsprop) (`rmsprop`) a correction to adagrad that prevents the learning rate from decaying too fast.- [FSAdagrad](https://cntk.ai/pythondocs/cntk.learners.htmlcntk.learners.fsadagrad) (`fsadagrad`) adds momentum and bias correction to RMSprop- [Adam / Adamax](https://cntk.ai/pythondocs/cntk.learners.htmlcntk.learners.adam) (`adam(..., adamax=False/True)`) see [this paper](https://arxiv.org/abs/1412.6980)- [Adadelta](https://cntk.ai/pythondocs/cntk.learners.htmlcntk.learners.adadelta) (`adadelta`) see [this paper](https://arxiv.org/abs/1212.5701) MomentumAmong these learners, `momentum_sgd`, `nesterov`, `fsadagrad`, and `adam` take an additional momentum schedule. When using momentum, instead of updating the parameter using the current gradient, we update the parameter using all previous gradients exponentially decayed. If there is a consistent direction that the gradients are pointing to, the parameter updates will develop momentum in that direction. [This page](http://distill.pub/2017/momentum/) has a good explanation of momentum.Like the learning rate schedule, the momentum schedule can be specified in two equivalent ways:- `momentum_schedule(float or list of floats, epoch_size)`- `momentum_as_time_constant(float or list of floats, epoch_size)`As with `learning_rate_schedule`, the arguments are interpreted in the same way, i.e. there's flexibility in specifying different momentum for the first few minibatches and for later minibatches. The difference between the two calls is just a simple transformation as explained in the following. 
Since momentum is creating a sort of exponential moving average it is fair to ask "when does the contribution of an old gradient diminish by a certain constant factor?". If we choose the constant factor to be $0.5$ we call this the [half-life](https://en.wikipedia.org/wiki/Half-life) and if we choose the constant to be $e^{-1}\approx 0.368$ we call this the [time constant](https://en.wikipedia.org/wiki/Time_constant). So `momentum_as_time_constant_schedule` specifies the number of samples it would take for the gradient of each minibatch to decay to $0.368$ of its original contribution on the momentum term. Specifying a `momentum_as_time_constant_schedule(300)` and a minibatch size of 10 is a little bit more meaningful than specifying `momentum_schedule(.967...)` even though both lead to the same updates. The way to convert between the two schedules is- $\textrm{momentum} = \exp(-\frac{\textrm{minibatch_size}}{\textrm{time_constant}})$- $\textrm{time_constant} = \frac{\textrm{minibatch_size}}{\log(1/\textrm{momentum})}$
###Code
mb_size = 10
time_constant = 300
momentum = math.exp(-mb_size/time_constant)
print('time constant for momentum of 0.967... = ', mb_size/math.log(1/momentum))
print('momentum for time constant of 300 = ', math.exp(-mb_size/time_constant))
###Output
time constant for momentum of 0.967... = 300.00000000000006
momentum for time constant of 300 = 0.9672161004820059
###Markdown
Apart from the momentum schedule, the momentum learners can also take a boolean "unit_gain" argument that determines the form of the momentum update:- `unit_gain=True`: $\textrm{momentum_direction} = \textrm{momentum} \cdot \textrm{old_momentum_direction} + (1 - \textrm{momentum}) \cdot \textrm{gradient}$- `unit_gain=False`: $\textrm{momentum_direction} = \textrm{momentum} \cdot \textrm{old_momentum_direction} + \textrm{gradient}$The idea behind the non-conventional `unit_gain=True` is that when momentum and/or learning rate changes, this way of updating does not lead to divergence. In general, users should exercise great caution when switching learning rate and/or momentum with `unit_gain=False`. One piece of relevant advice is Remark 2 in [this paper](https://arxiv.org/abs/1706.02677) which shows how to adjust your momentum when the learning rate changes in the `unit_gain=False` case.The following code illustrates that, for the case of `unit_gain=False`, the two ways of specifying momentum (as time constant or not) are equivalent. It also shows that when `unit_gain=True` you need to scale your learning rate by $1/(1-\textrm{momentum})$ to match the `unit_gain=False` case
###Code
lr_schedule = C.learning_rate_schedule(1, C.UnitType.minibatch)
ug_schedule = C.learning_rate_schedule(1/(1-momentum), C.UnitType.minibatch)
m_schedule = C.momentum_schedule(momentum)
t_schedule = C.momentum_as_time_constant_schedule(time_constant)
msgd = C.momentum_sgd(z.parameters, lr_schedule, m_schedule, unit_gain=False)
tsgd = C.momentum_sgd(z.parameters, lr_schedule, t_schedule, unit_gain=False)
usgd = C.momentum_sgd(z.parameters, ug_schedule, m_schedule, unit_gain=True)
print(inspect_update(msgd, mb_size, 5)[0][0])
print(inspect_update(tsgd, mb_size, 5)[0][0])
print(inspect_update(usgd, mb_size, 5)[0][0])
###Output
[-1.436 -1.436]
[-1.436 -1.436]
[-1.436 -1.436]
###Markdown
Learners with individual learning ratesAmong the built-in learners, `adagrad`, `rmsprop`, `fsadagrad`, `adam`, and `adadelta` have rules for tuning the learning rate of each parameter individually. They still require the tuning of a global learning rate that gets multiplied with the individual learning rate of each parameter. At the heart of these techniques is basically the idea that we can perform SGD on each parameter separately. This can be useful if some features appear less often than others and therefore different features are updated at different frequencies. With a single learning rate we run the risk of decaying it a lot before we see a rare feature (e.g. a rare word). Instead we might want the updates to depend on how often those features have been seen rather than how many minibatches have been processed. These methods are typically easier to tune, but there is some new evidence that [they overfit more easily](https://arxiv.org/abs/1705.08292) than SGD with momentum.Below, we show how these learners can be configured and how their updates affect the model parameters. The main take-away is that **if you switch learners, you need to retune the learning rate**. In this example the initial points and gradients are the same yet different learners arrive at different parameter values after 10 minibatches. Since the gradients are always 1, it is fair to say that in this case the learner with the most negative parameter value is the best. However, if we retune the learning rate of the learner with the least negative parameter value (adadelta), we can drive its parameters to similar values as those of the learner with the most negative parameter value (adamax). Also, this is an artificial example where gradients are consistently equal to 1, so the methods that have momentum built-in (`adam`/`adamax`/`fsadagrad`) should be better than the methods that don't have built-in momentum (for the same value of the learning rate).
###Code
mb_size = 32
time_constant = 1000
lr_schedule = C.learning_rate_schedule(1, C.UnitType.minibatch)
t_schedule = C.momentum_as_time_constant_schedule(time_constant)
tsgd = C.momentum_sgd(z.parameters, lr_schedule, t_schedule, unit_gain=False)
adadelta = C.adadelta(z.parameters, lr_schedule, 0.999, 1e-6)
adagrad = C.adagrad(z.parameters, lr_schedule)
adam = C.adam(z.parameters, lr_schedule, t_schedule, unit_gain=False)
adamax = C.adam(z.parameters, lr_schedule, t_schedule, unit_gain=False, adamax=True)
fsadagrad = C.fsadagrad(z.parameters, lr_schedule, t_schedule, unit_gain=False)
rmsprop = C.rmsprop(z.parameters, lr_schedule, gamma=0.999, inc=1.0+1e-9, dec=1.0-1e-9, max=np.inf, min=1e-30)
print('adadelta :', inspect_update(adadelta, mb_size, 10)[0][0])
print('adagrad :', inspect_update(adagrad, mb_size, 10)[0][0])
print('adam :', inspect_update(adam, mb_size, 10)[0][0])
print('adamax :', inspect_update(adamax, mb_size, 10)[0][0])
print('fsadagrad:', inspect_update(fsadagrad, mb_size, 10)[0][0])
print('rmsprop :', inspect_update(rmsprop, mb_size, 10)[0][0])
adadelta_schedule = C.learning_rate_schedule(1004, C.UnitType.minibatch)
adadelta_tuned = C.adadelta(z.parameters, adadelta_schedule, 0.999, 1e-6)
print('adadelta2:', inspect_update(adadelta_tuned, mb_size, 10)[0][0])
###Output
adadelta : [-0.0099 -0.0099]
adagrad : [-0.3125 -0.3125]
adam : [-9.9203 -9.9203]
adamax : [-9.9227 -9.9227]
fsadagrad: [-8.8573 -8.8573]
rmsprop : [-0.3125 -0.3125]
adadelta2: [-9.9228 -9.9228]
###Markdown
Writing a learner as a CNTK expressionIf you want to experiment with your own learner, you should first try to write it as a CNTK expression. This is much faster than the next alternative, which is to write it in Python. CNTK has a **universal learner** that accepts a function as an argument. This function takes a list of parameters and gradients and creates an expression (a network) that, when evaluated, will assign new values to the parameters according to the learning rule you coded. At the time of this writing, the universal learner does not support schedules for learning rate and momentum. If this is necessary, the user must create a new learner. Another shortcoming of this learner is it only supports densely stored gradients. If you get an error that a quantity is not dense, you have two options:- Replace input variables that are sparse with dense (is_sparse=False)- Find the parameters with sparse gradients (typically those used at the very first layer) and use a built-in learner for those parametersWe are working to lift this requirement. Below we show how to write RMSprop using the universal learner.
###Code
def my_rmsprop(parameters, gradients):
rho = 0.999
lr = 0.01
# We use the following accumulator to store the moving average of every squared gradient
accumulators = [C.constant(1e-6, shape=p.shape, dtype=p.dtype) for p in parameters]
update_funcs = []
for p, g, a in zip(parameters, gradients, accumulators):
# We declare that `a` will be replaced by an exponential moving average of squared gradients
# The return value is the expression rho * a + (1-rho) * g * g
accum_new = C.assign(a, rho * a + (1-rho) * g * g)
# This is the rmsprop update.
# We need to use accum_new to create a dependency on the assign statement above.
# This way, when we run this network both assigns happen.
update_funcs.append(C.assign(p, p - lr * g / C.sqrt(accum_new)))
return C.combine(update_funcs)
my_learner = C.universal(my_rmsprop, z.parameters)
print(inspect_update(my_learner, 10, 2)[0][0])
###Output
[-0.5397 -0.5397]
###Markdown
Writing a learner as a Python classCNTK expressions are very powerful and all the well-known learners can be expressed in this way. Still, there can be rare cases where you want to perform an update that cannot be currently implemented as a CNTK expression. In those cases you can implement your learner as a Python class. CNTK will then call its update method during training. Since this means the training loop (C++ code) is calling into Python (your learner) for every single minibatch, this approach is the slowest of all options.In order for your class to be understood as a learner, it has to inherit from `cntk.UserLearner`. The constructor can be used to set up the learner. The trainer will call the learner's `update` method by supplying it a dictionary, whose keys are the parameters and whose values are the corresponding gradients, as well as the number of samples in the minibatch and whether we have reached the end of a sweep through the data. The implementation of `update` is totally up to you.In the code below, we create a learner that just performs SGD. In the constructor we create a dictionary mapping tensor shapes to CNTK expressions with the gradients being input variables. In the `update` method, for each parameter-gradient pair we look up the expression corresponding to the shape of the parameter, bind the gradient to the input of the expression and evaluate the expression. Finally, we slice the result to get rid of the batch axis and update the parameter. We have also slightly modified the `inspect_update` method to make it work with a user defined learner.
###Code
class MySgd(C.UserLearner):
def __init__(self, parameters, lr_schedule):
super(MySgd, self).__init__(parameters, lr_schedule, as_numpy=False)
self.new_parameter = {}
self.grad_input = {}
self.sample_count_input = C.input_variable((), name='count')
lr = lr_schedule[0] # assuming constant learning rate
eta = lr / self.sample_count_input
# we need one graph per parameter shape
for param in parameters:
p_shape = param.shape
self.grad_input[p_shape] = C.input_variable(p_shape)
self.new_parameter[p_shape] = param - eta * self.grad_input[p_shape]
def update(self, gradient_values, training_sample_count, sweep_end):
for p, g in gradient_values.items():
new_p = self.new_parameter[p.shape]
grad_input = self.grad_input[p.shape]
data = {
self.sample_count_input: np.asarray(training_sample_count),
grad_input: g
}
result = new_p.eval(data, as_numpy=False)
shape = result.shape
# result has the shape of a complete minibatch, but contains
# only one tensor, which we want to write to p. This means, we
# have to slice off the leading dynamic axis.
static_tensor = result.data.slice_view([0]*len(shape), shape[1:])
p.set_value(static_tensor)
return True
mb_size = 64
lr_schedule = C.learning_rate_schedule(1, C.UnitType.minibatch)
my_sgd = MySgd(z.parameters, lr_schedule)
def inspect_user_learner_update(learner, mbsize, count):
# user defined learner parameters are of type C.cntk_py.Parameter which is not nice to work with
# we copy them out to easy_parameters and update their __class__ attribute to be C.Parameter
easy_parameters = [p for p in learner.parameters()]
for p in easy_parameters:
p.__class__ = C.Parameter
old_values = [p.value for p in easy_parameters]
for p in easy_parameters:
p.value = 0 * p.value
updates = {p: p.value + 1 for p in easy_parameters}
for i in range(count):
learner.update(updates, np.float32(mbsize), sweep_end=False)
ret_values = [p.value for p in easy_parameters]
for p, o in zip(easy_parameters, old_values):
p.value = o
return ret_values
print(inspect_user_learner_update(my_sgd, mb_size, 10)[0][0])
###Output
[-0.1562 -0.1562]
|
docs/transformers.ipynb | ###Markdown
Circuit Transformers View on QuantumAI Run in Google Colab View source on GitHub Download notebook Setup
###Code
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
import cirq
print("installed cirq.")
###Output
_____no_output_____
###Markdown
What is a Transformer?A transformer in Cirq is any callable that satisfies the `cirq.TRANSFORMER` API, and *transforms* an input circuit into an output circuit.Circuit transformations are often necessary to compile a user-defined circuit to an equivalent circuit that satisfies the constraints necessary to be executable on a specific device or simulator. The compilation process often involves steps like:- Gate Decompositions: Rewrite the circuit using only gates that belong to the device target gateset, i.e. the set of gates which the device can execute. - Qubit Mapping and Routing: Map the logic qubits in the input circuit to physical qubits on the device and insert appropriate swap operations such that the final circuit respects the hardware topology. - Circuit Optimizations: Perform hardware specific optimizations, like merging and replacing connected components of 1- and 2-qubit operations with more efficient rewrite operations, commuting Z gates through the circuit, aligning gates in moments and more.Cirq provides many out-of-the-box transformers which can be used as individual compilation passes. It also supplies a general framework for users to create their own transformers, by using powerful primitives and by bundling existing transformers together, to enable the compilation of circuits for specific targets. This page covers the available transformers in Cirq, how to use them, and how to write a simple transformer. The [Custom Transformers](/cirq/transform/transformers) page presents the details on creating more complex custom transformers through primitives and composition. Built-in Transformers in Cirq OverviewTransformers that come with cirq can be found in the `cirq.transformers` package.A few notable examples are:* **`cirq.align_left` / `cirq.align_right`**: Align gates to the left/right of the circuit by sliding them as far as possible along each qubit in the chosen direction.* **`cirq.defer_measurements`**: Moves all (non-terminal) measurements in a circuit to the end of the circuit by implementing the deferred measurement principle.* **`cirq.drop_empty_moments`** / **`cirq.drop_negligible_operations`**: Removes moments that are empty or operations that have very small effects, respectively.* **`cirq.eject_phased_paulis`**: Pushes X, Y, and PhasedX gates towards the end of the circuit, potentially absorbing Z gates and modifying gates along the way.* **`cirq.eject_z`**: Pushes Z gates towards the end of the circuit, potentially adjusting phases of gates that they pass through.* **`cirq.expand_composite`**: Uses `cirq.decompose` to expand gates built from other gates (composite gates).* **`cirq.merge_k_qubit_unitaries`**: Replaces connected components of unitary operations, acting on <= k qubits, with op-tree given by `rewriter(circuit_op)`.* **`cirq.optimize_for_target_gateset`**: Attempts to convert a circuit into an equivalent circuit using only gates from a given target gateset.* **`cirq.stratified_circuit`**: Repacks the circuit to ensure that moments only contain operations from the same category.* **`cirq.synchronize_terminal_measurements`**: Moves all terminal measurements in a circuit to the final moment, if possible.Below you can see how to implement a transformer pipeline as a function called `optimize_circuit`, which composes a few of the available Cirq transformers.
###Code
def optimize_circuit(circuit, context=None, k=2):
# Merge 2-qubit connected components into circuit operations.
optimized_circuit = cirq.merge_k_qubit_unitaries(
circuit, k=k, rewriter=lambda op: op.with_tags("merged"), context=context
)
# Drop operations with negligible effect / close to identity.
optimized_circuit = cirq.drop_negligible_operations(optimized_circuit, context=context)
# Expand all remaining merged connected components.
optimized_circuit = cirq.expand_composite(
optimized_circuit, no_decomp=lambda op: "merged" not in op.tags, context=context
)
# Synchronize terminal measurements to be in the same moment.
optimized_circuit = cirq.synchronize_terminal_measurements(optimized_circuit, context=context)
# Assert the original and optimized circuit are equivalent.
cirq.testing.assert_circuits_with_terminal_measurements_are_equivalent(
circuit, optimized_circuit
)
return optimized_circuit
q = cirq.LineQubit.range(3)
circuit = cirq.Circuit(
cirq.H(q[1]),
cirq.CNOT(*q[1:]),
cirq.H(q[0]),
cirq.CNOT(*q[:2]),
cirq.H(q[1]),
cirq.CZ(*q[:2]),
cirq.H.on_each(*q[:2]),
cirq.CNOT(q[2], q[0]),
cirq.measure_each(*q),
)
print("Original Circuit:", circuit, sep="\n")
print("Optimized Circuit:", optimize_circuit(circuit), sep="\n")
###Output
_____no_output_____
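###Markdown
 Each built-in transformer can also be applied on its own. A small standalone example (an added illustration, not part of the original pipeline above), using `cirq.eject_z` to push Z gates towards the end of the circuit:
###Code
q0, q1 = cirq.LineQubit.range(2)
simple_circuit = cirq.Circuit(cirq.Z(q0), cirq.CZ(q0, q1), cirq.H(q1), cirq.Z(q1))
# eject_z commutes the Z rotations forward, leaving them at the end where possible.
print(cirq.eject_z(simple_circuit))
###Output
_____no_output_____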
###Markdown
Inspecting transformer actionsEvery transformer in Cirq accepts a `cirq.TransformerContext` instance, which stores common configurable options useful for all transformers. One of the members of transformer context dataclass is `cirq.TransformerLogger` instance. When a logger instance is specified, every cirq transformer logs its action on the input circuit using the given logger instance. The logs can then be inspected to understand the action of each individual transformer on the circuit.Below, you can inspect the action of each transformer in the `optimize_circuit` method defined above.
###Code
context = cirq.TransformerContext(logger=cirq.TransformerLogger())
optimized_circuit = optimize_circuit(circuit, context)
context.logger.show()
###Output
_____no_output_____
###Markdown
By first using `cirq.merge_k_qubit_unitaries` to turn connected components of the circuit into `cirq.CircuitOperation`s, `cirq.drop_negligible_operations` was able to identify that one of the merged connected components was equivalent to the identity operation and remove it. The remaining steps returned the circuit to a more typical state, expanding intermediate `cirq.CircuitOperation`s and aligning measurements to be terminal measurements. Support for no-compile tagsCirq also supports tagging operations with no-compile tags such that these tagged operations are ignored when applying transformations on the circuit. This allows users to gain more fine-grained conrol over the compilation process. Any valid tag can be used as a "no-compile" tag by adding it to the `tags_to_ignore` field in `cirq.TransformerContext`. When called with a context, cirq transformers will inspect the `context.tags_to_ignore` field and ignore an operation if `op.tags & context.tags_to_ignore` is not empty. Below, you can use no-compile tags when transforming a circuit using the `optimize_circuit` mehod defined above.
###Code
# Echo pulses inserted in the circuit to prevent dephasing during idling should be ignored.
circuit = cirq.Circuit(
cirq.H(q[0]),
cirq.CNOT(*q[:2]),
[
op.with_tags("spin_echoes") for op in [cirq.X(q[0]) ** 0.5, cirq.X(q[0]) ** -0.5]
], # the echo pulses
[cirq.CNOT(*q[1:]), cirq.CNOT(*q[1:])],
[cirq.CNOT(*q[:2]), cirq.H(q[0])],
cirq.measure_each(*q),
)
# Original Circuit
print("Original Circuit:", circuit, "\n", sep="\n")
# Optimized Circuit without tags_to_ignore
print("Optimized Circuit without specifying tags_to_ignore:")
print(optimize_circuit(circuit, k=1), "\n")
# Optimized Circuit ignoring operations marked with tags_to_ignore.
print("Optimized Circuit while ignoring operations marked with tags_to_ignore:")
context = cirq.TransformerContext(tags_to_ignore=["spin_echoes"])
print(optimize_circuit(circuit, k=1, context=context), "\n")
###Output
_____no_output_____
###Markdown
Support for recursively transforming sub-circuitsBy default, an operation `op` of type `cirq.CircuitOperation` is considered as a single top-level operation by cirq transformers. As a result, the sub-circuits wrapped inside circuit operations will often be left as they are and a transformer will only modify the top-level circuit. If you wish to recursively run a transformer on every nested sub-circuit wrapped inside a `cirq.CircuitOperation`, you can set `context.deep=True` in the `cirq.TransformerContext` object. Note that tagged circuit operations marked with any of `context.tags_to_ignore` will be ignored even if `context.deep is True`. See the example below for a better understanding.
###Code
q = cirq.LineQubit.range(2)
circuit_op = cirq.CircuitOperation(
cirq.FrozenCircuit(cirq.I.on_each(*q), cirq.CNOT(*q), cirq.I(q[0]).with_tags("ignore"))
)
circuit = cirq.Circuit(
cirq.I(q[0]), cirq.I(q[1]).with_tags("ignore"), circuit_op, circuit_op.with_tags("ignore")
)
print("Original Circuit:", circuit, "\n", sep="\n\n")
context = cirq.TransformerContext(tags_to_ignore=["ignore"], deep=False)
print("Optimized Circuit with deep=False and tags_to_ignore=['ignore']:\n")
print(cirq.drop_negligible_operations(circuit, context=context), "\n\n")
context = cirq.TransformerContext(tags_to_ignore=["ignore"], deep=True)
print("Optimized Circuit with deep=True and tags_to_ignore=['ignore']:\n")
print(cirq.drop_negligible_operations(circuit, context=context), "\n")
###Output
_____no_output_____
###Markdown
The leading identity gate that wasn't tagged was removed from both optimized circuits, but the identity gates within each `cirq.CircuitOperation` were removed if `deep = true` and the `CircuitOperation` wasn't tagged and the identity operation wasn't tagged. Compiling to NISQ targets: `cirq.CompilationTargetGateset`Cirq's philosophy on compiling circuits for execution on a NISQ target device or simulator is that it would often require running only a handful of individual compilation passes on the input circuit, one after the other.**`cirq.CompilationTargetGateset`** is an abstraction in Cirq to represent such compilation targets as well as the bundles of transformer passes which should be executed to compile a circuit to this target. Cirq has implementations for common target gatesets like `cirq.CZTargetGateset`, `cirq.SqrtIswapTargetGateset` etc.**`cirq.optimize_for_target_gateset`** is a transformer which compiles a given circuit for a `cirq.CompilationTargetGateset` via the following steps:1. Run all `gateset.preprocess_transformers`2. Convert operations using built-in `cirq.decompose` + `gateset.decompose_to_target_gateset`.3. Run all `gateset.postprocess_transformers`The preprocess transformers often includes optimizations like merging connected components of 1/2 qubit unitaries into a single unitary matrix, which can then be replaced with an efficient analytical decomposition as part of step-2. The post-process transformers often includes cleanups and optimizations like dropping negligible operations, converting single qubit rotations into desired form, circuit alignments etc.
###Code
# Original QFT Circuit on 3 qubits.
q = cirq.LineQubit.range(3)
circuit = cirq.Circuit(cirq.QuantumFourierTransformGate(3).on(*q), cirq.measure(*q))
print("Original Circuit:", circuit, "\n", sep="\n")
# Compile the circuit for CZ Target Gateset.
gateset = cirq.CZTargetGateset(allow_partial_czs=True)
cz_circuit = cirq.optimize_for_target_gateset(circuit, gateset=gateset)
cirq.testing.assert_circuits_with_terminal_measurements_are_equivalent(circuit, cz_circuit)
print("Circuit compiled for CZ Target Gateset:", cz_circuit, "\n", sep="\n")
###Output
_____no_output_____
###Markdown
`cirq.optimize_for_target_gateset` also supports all the features discussed above, using `cirq.TransformerContext`. For example, you can compile the circuit for sqrt-iswap target gateset and inspect action of individual transformers using `cirq.TransformerLogger`, as shown below.
###Code
context = cirq.TransformerContext(logger=cirq.TransformerLogger())
gateset = cirq.SqrtIswapTargetGateset()
sqrt_iswap_circuit = cirq.optimize_for_target_gateset(circuit, gateset=gateset, context=context)
cirq.testing.assert_circuits_with_terminal_measurements_are_equivalent(circuit, sqrt_iswap_circuit)
context.logger.show()
###Output
_____no_output_____
###Markdown
Building custom transformers `cirq.TRANSFORMER` API and `@cirq.transformer` decoratorAny callable that satisfies the `cirq.TRANSFORMER` contract, i.e. takes a `cirq.AbstractCircuit` and `cirq.TransformerContext` and returns a transformed `cirq.AbstractCircuit`, is a valid transformer in Cirq. You can create a custom transformer by simply decorating a class or method that satisfies the above contract with the `@cirq.transformer` decorator.
###Code
@cirq.transformer
def reverse_circuit(circuit, *, context = None):
"""Transformer to reverse the input circuit."""
return circuit[::-1]
@cirq.transformer
class SubstituteGate:
"""Transformer to substitute `source` gates with `target` in the input circuit."""
def __init__(self, source, target):
self._source = source
self._target = target
def __call__(self, circuit, *, context = None):
batch_replace = []
for i, op in circuit.findall_operations(lambda op: op.gate == self._source):
batch_replace.append((i, op, self._target.on(*op.qubits)))
transformed_circuit = circuit.unfreeze(copy=True)
transformed_circuit.batch_replace(batch_replace)
return transformed_circuit
q = cirq.NamedQubit("q")
circuit = cirq.Circuit(cirq.X(q), cirq.CircuitOperation(cirq.FrozenCircuit(cirq.X(q), cirq.Y(q))), cirq.Z(q))
substitute_gate = SubstituteGate(cirq.X, cirq.S)
print("Original Circuit:", circuit, "\n", sep="\n")
print("Reversed Circuit:", reverse_circuit(circuit), "\n", sep="\n")
print("Substituted Circuit:", substitute_gate(circuit), sep="\n")
###Output
_____no_output_____
###Markdown
`cirq.TransformerContext` to store common configurable options`cirq.TransformerContext` is a dataclass that stores common configurable options for all transformers. All cirq transformers should accept the transformer context as an optional keyword argument. The `@cirq.transformer` decorator can inspect the `cirq.TransformerContext` argument and automatically append useful functionality, like support for automated logging and recursively running the transformer on nested sub-circuits. `cirq.TransformerLogger` and support for automated loggingThe `cirq.TransformerLogger` class is used to log the actions of a transformer on an input circuit. The `@cirq.transformer` decorator automatically adds support for logging the initial and final circuits for each transformer step.
###Code
context = cirq.TransformerContext(logger=cirq.TransformerLogger())
transformed_circuit = reverse_circuit(circuit, context=context)
transformed_circuit = substitute_gate(transformed_circuit, context=context)
context.logger.show()
###Output
_____no_output_____
###Markdown
Support for `deep=True`You can call `@cirq.transformer(add_deep_support=True)` to automatically add the functionality of recursively running the custom transformer on circuits wrapped inside `cirq.CircuitOperation`. The recursive execution behavior of the transformer can then be controlled by setting `deep=True` in the transformer context.
###Code
@cirq.transformer(add_deep_support=True)
def reverse_circuit_deep(circuit, *, context = None):
"""Transformer to reverse the input circuit."""
return circuit[::-1]
@cirq.transformer(add_deep_support=True)
class SubstituteGateDeep(SubstituteGate):
"""Transformer to substitute `source` gates with `target` in the input circuit."""
pass
context = cirq.TransformerContext(deep=True)
substitute_gate_deep = SubstituteGateDeep(cirq.X, cirq.S)
print("Original Circuit:", circuit, "\n", sep="\n")
print("Reversed Circuit with deep=True:", reverse_circuit_deep(circuit, context=context), "\n", sep="\n")
print("Substituted Circuit with deep=True:", substitute_gate_deep(circuit, context=context), sep="\n")
###Output
_____no_output_____
###Markdown
Transformer Primitives and Decompositions Moment preserving transformer primitivesCirq provides useful abstractions to implement common transformer patterns while preserving the moment structure of the input circuit. Some of the notable transformer primitives are:- **`cirq.map_operations`**: Applies local transformations on operations by calling `map_func(op, moment_index)` for each operation `op`.- **`cirq.map_moments`**: Applies local transformations on moments by calling `map_func(m, moment_index)` for each moment `m`.- **`cirq.merge_operations`**: Merges connected components of operations by iteratively calling `merge_func(op1, op2)` for every pair of mergeable operations `op1` and `op2`.- **`cirq.merge_moments`**: Merges adjacent moments, from left to right, by iteratively calling `merge_func(m1, m2)` for adjacent moments `m1` and `m2`.An important property of these primitives is that they support the common configurable options present in `cirq.TransformerContext`, such as `tags_to_ignore` and `deep`. See the example below for a better understanding.
###Code
@cirq.transformer
def substitute_gate_using_primitives(circuit, *, context = None, source = cirq.X, target= cirq.S):
"""Transformer to substitute `source` gates with `target` in the input circuit.
The transformer is implemented using `cirq.map_operations` primitive and hence
has built-in support for
1. Recursively running the transformer on sub-circuits if `context.deep is True`.
2. Ignoring operations tagged with any of `context.tags_to_ignore`.
"""
return cirq.map_operations(
circuit,
map_func=lambda op, _: target.on(*op.qubits) if op.gate == source else op,
deep = context.deep if context else False,
tags_to_ignore=context.tags_to_ignore if context else ()
)
x_y_x = [cirq.X(q), cirq.Y(q), cirq.X(q).with_tags("ignore")]
circuit = cirq.Circuit(x_y_x, cirq.CircuitOperation(cirq.FrozenCircuit(x_y_x)), x_y_x)
context = cirq.TransformerContext(deep=True, tags_to_ignore=("ignore",))
print("Original Circuit:", circuit, "\n", sep="\n")
print("Substituted Circuit:", substitute_gate_using_primitives(circuit, context=context), "\n", sep="\n")
###Output
_____no_output_____
###Markdown
Analytical Gate DecompositionsGate decomposition is the process of implementing / decomposing a given unitary `U` using only gates that belong to a specific target gateset. Cirq provides many analytical decomposition methods, often based on [KAK Decomposition](https://arxiv.org/abs/quant-ph/0507171), to decompose two qubit unitaries into specific target gatesets. Some notable decompositions are:* **`cirq.single_qubit_matrix_to_pauli_rotations`**: Decomposes a single qubit matrix to ZPow/XPow/YPow rotations. * **`cirq.single_qubit_matrix_to_phased_x_z`**: Decomposes a single-qubit matrix to a PhasedX and Z gate.* **`cirq.two_qubit_matrix_to_sqrt_iswap_operations`**: Decomposes any two-qubit unitary matrix into ZPow/XPow/YPow/sqrt-iSWAP gates.* **`cirq.two_qubit_matrix_to_cz_operations`**: Decomposes any two-qubit unitary matrix into ZPow/XPow/YPow/CZ gates.* **`cirq.three_qubit_matrix_to_operations`**: Decomposes any three-qubit unitary matrix into CZ/CNOT and single qubit rotations.You can use these analytical decomposition methods to build transformers which can rewrite a given circuit using only gates from the target gateset.
###Code
@cirq.transformer
def convert_to_cz_target(circuit, *, context=None, atol=1e-8, allow_partial_czs=True):
"""Transformer to rewrite the given circuit using CZs + 1-qubit rotations.
Note that the transformer decomposes only operations on <= 2-qubits and is
presented as an illustration of using transformer primitives + analytical
decomposition methods.
"""
def map_func(op: cirq.Operation, _) -> cirq.OP_TREE:
if not (cirq.has_unitary(op) and cirq.num_qubits(op) <= 2):
return op
mat = cirq.unitary(op)
q = op.qubits
if cirq.num_qubits(op) == 1:
g = cirq.single_qubit_matrix_to_phxz(mat)
return g.on(*q) if g else []
return cirq.two_qubit_matrix_to_cz_operations(*q, mat, allow_partial_czs=allow_partial_czs, atol=atol)
return cirq.map_operations_and_unroll(
circuit,
map_func,
deep=context.deep if context else False,
tags_to_ignore=context.tags_to_ignore if context else ()
)
circuit = cirq.testing.random_circuit(qubits=3, n_moments=5, op_density=0.8, random_state=1234)
converted_circuit = convert_to_cz_target(circuit)
cirq.testing.assert_circuits_with_terminal_measurements_are_equivalent(circuit, converted_circuit)
print(f"Original Circuit", circuit, "\n", sep="\n")
print(f"Circuit compiled for CZ Target Gateset using custom transformer", converted_circuit, "\n", sep="\n")
###Output
_____no_output_____ |
notebooks/notebook-test.ipynb | ###Markdown
Intro2ML - Notebook testThis notebook serves as a quick test to check that Google Colab works (or your local install) and that the initial setup is complete. You will need a **Google account** to use Colab. They are freely available and can be tied to any email address: https://myaccount.google.com/intro[](https://colab.research.google.com/github/mj-will/intro2ml/blob/master/notebooks/notebook-test.ipynb) Import the modules used in the notebooks; if they all import, you should be good to go.
###Code
import numpy
import six.moves
import pandas
import seaborn
import matplotlib.pyplot
import sklearn
import tensorflow.keras
print(f'Tensorflow version {tensorflow.keras.__version__}')
print('Everything imported correctly! Good to go!')
###Output
Tensorflow version 2.2.4-tf
Everything imported correctly! Good to go!
|
HW4/Q3.3.ipynb | ###Markdown
Multivariate Gaussian function$f(\mathbf{x}, \boldsymbol{\mu}, \Sigma) = (2\pi)^{-M/2}|\Sigma|^{-1/2}~e^{\frac{-1}{2}(\mathbf{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})}$
###Code
import numpy as np               # needed by this cell and the cells below
import matplotlib.pyplot as plt  # needed by the plotting cells below

def gaussian(x, mu, cov):
# x and mu should be vectors in numpy, shape=(2,)
# cov should be a matrix in numpy, shape=(2,2)
M = 2
scale = (2*np.pi)**(-M/2)*np.linalg.det(cov)**(-1/2)
return scale*np.exp(-(1/2)*(x-mu).T @ np.linalg.inv(cov) @ (x-mu))
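# Optional sanity check (illustrative addition, not part of the original solution):
# at x = mu with identity covariance, the density should equal (2*pi)**(-M/2) = 1/(2*pi).
assert np.isclose(gaussian(np.zeros(2), np.zeros(2), np.eye(2)), 1 / (2 * np.pi))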
###Output
_____no_output_____
###Markdown
Plot Gaussian contours function
###Code
def plot_gaussian(mu, cov, x1_min=-10, x1_max=10, x2_min=-10, x2_max=10, color=None):
# x and mu should be vectors in numpy, shape=(2,)
# cov should be a matrix in numpy, shape=(2,2)
x1_values = np.linspace(x1_min, x1_max, 101)
x2_values = np.linspace(x2_min, x2_max, 101)
x1_grid, x2_grid = np.meshgrid(x1_values,x2_values)
M,N = x1_grid.shape
y_grid = np.zeros((M,N))
x = np.zeros((2,))
for i in range(M):
for j in range(N):
x[0] = x1_grid[i,j]
x[1] = x2_grid[i,j]
y_grid[i,j] = gaussian(x, mu, cov)
plt.contour(x1_grid, x2_grid, y_grid, colors=color)
###Output
_____no_output_____
###Markdown
Load dataNote: The code assumes that the data file is in the same folder as the jupyter notebook. In Google colab, you can upload the file directly into the workspace by in the Files tab on the left.
###Code
X = np.loadtxt("./gmm_data.csv", delimiter=",")
print(X.shape)
###Output
(2000, 2)
###Markdown
Gaussian mixture model Hyperparameters
###Code
# N: number of observations
# D: number of features of each observation
# K: number of classes
# T: number of iterations
N, D = X.shape
K = 4
T = 1
# T = 30
# T = 100
###Output
_____no_output_____
###Markdown
Parameter initialization
###Code
# pi_0: shape=(K,), pi_0[k] = 1 / K
# mu_0: shape=(K, D), mu_0[k] = np.full(D, k+1)
# sigma_0: shape=(K, D, D), sigma_0[k] = np.eye(D)
pi_0 = np.full(K, 1 / K)
mu_0 = np.tile(np.arange(1, K+1), (D, 1)).T
sigma_0 = np.tile(np.eye(D), (K, 1, 1))
###Output
_____no_output_____
###Markdown
EM update
###Code
# pi: shape=(K,), pi[k] = p(z_k^(i) = 1)
# mu: shape=(K, D), mu[k] is the centroid of cluster k
# sigma: shape=(K, D, D), sigma[k] is the covariance matrix of cluster k
# eta: shape=(K, N), eta[k, i] = p(z_k^(i) = 1 | x^(i), mu_k^(t), sigma_k^(t))
# n: shape=(K,), n[k] = N_k^(t)
pi = pi_0.copy()
mu = mu_0.copy()
sigma = sigma_0.copy()
eta = np.zeros((K, N))
n = np.zeros(K)
for _ in range(T):
# Expectation
for i, x_i in enumerate(X):
for k, (pi_k, mu_k, sigma_k) in enumerate(zip(pi, mu, sigma)):
eta[k, i] = pi_k * gaussian(x_i, mu_k, sigma_k)
eta = eta / np.einsum('ki->i', eta)
# Maximization
n = np.einsum('ki->k', eta)
pi = n / N
mu = np.einsum('ki,id->kd', eta, X) / n[:, None]
# Compute centered X: Xc = X - mu
Xc = X[:, None, :] - mu[None, :, :]
sigma = np.einsum('ki,ikd,ike->kde', eta, Xc, Xc) / n[:, None, None]
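# Illustrative addition (not part of the original assignment): track the data
# log-likelihood under the current parameters to monitor EM convergence.
mixture_pdf = np.zeros(N)
for i, x_i in enumerate(X):
    for k in range(K):
        mixture_pdf[i] += pi[k] * gaussian(x_i, mu[k], sigma[k])
log_likelihood = np.sum(np.log(mixture_pdf))
print(f"Log-likelihood after {T} iteration(s): {log_likelihood:.2f}")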
###Output
_____no_output_____
###Markdown
Plot data with initial Gaussian contours
###Code
# Square figure size
plt.figure(figsize=(8,8))
# Plot points
plt.plot(X[:,0], X[:,1], 'o', markerfacecolor="None", alpha=0.3)
# Plot K Gaussians
colors = ['tab:orange', 'tab:green', 'tab:red', 'tab:purple']
for k in range(K):
plot_gaussian(mu_0[k], sigma_0[k], color=colors[k])
# Axes
plt.gca().axhline(y=0, color='gray')
plt.gca().axvline(x=0, color='gray')
# Labels
plt.xlabel("$x_1$", fontsize=20)
plt.ylabel("$x_2$", fontsize=20)
plt.title("Iteration 0", fontsize=20)
###Output
_____no_output_____
###Markdown
Plot data with updated Gaussian contours
###Code
# Square figure size
plt.figure(figsize=(8,8))
# Plot points
plt.plot(X[:,0], X[:,1], 'o', markerfacecolor="None", alpha=0.3)
# Plot K Gaussians
colors = ['tab:orange', 'tab:green', 'tab:red', 'tab:purple']
for k in range(K):
plot_gaussian(mu[k], sigma[k], color=colors[k])
# Axes
plt.gca().axhline(y=0, color='gray')
plt.gca().axvline(x=0, color='gray')
# Labels
plt.xlabel("$x_1$", fontsize=20)
plt.ylabel("$x_2$", fontsize=20)
plt.title(f"Iteration {T}", fontsize=20)
###Output
_____no_output_____ |
Week04/Homework02.ipynb | ###Markdown
Your name here. Your Workshop section here. Homework 2: Control Structures ** Submit this notebook to bCourses to receive credit for this assignment. **Please complete this homework assignment in code cells in the iPython notebook. Include comments in your code when necessary. Enter your name in the cell at the top of the notebook, and rename the notebook [email_name]_HW02.ipynb, where [email_name] is the part of your UCB email address that precedes "@berkeley.edu". Please also save the notebook once you have executed it as a PDF and upload that to bcourses as well (note, that when saving as PDF you don't want to use the option with latex because it crashes, but rather the one to save it directly as a PDF). Problem 1: Binomial Coefficients[Adapted from Newman, Exercise 2.11] The binomial coefficient $n \choose k$ is an integer equal to$$ {n \choose k} = \frac{n!}{k!(n-k)!} = \frac{n \times (n-1) \times (n-2) \times \cdots \times (n-k + 1)}{1 \times 2 \times \cdots \times k} $$when $k \geq 1$, or ${n \choose 0} = 1$ when $k=0$. (The special case $k=0$ can be included in the general definition by using the conventional definition $0! \equiv 1$.)1. Write a function `factorial(n)` that takes an integer $n$ and returns $n!$ as an integer. It should yield $1$ when $n=0$. You may assume that the argument will also be an integer greater than or equal to 0.1. Using the form of the binomial coefficient given above, write a function `binomial(n,k)` that calculates the binomial coefficient for given $n$ and $k$. Make sure your function returns the answer in the form of an integer (not a float) and gives the correct value of 1 for the case where $k=0$. (Hint: Use your `factorial` function from Part 1.)1. Using your `binomial` function, write a function `pascals_triangle(N)` to print out the first $N$ lines of "Pascal's triangle" (starting with the $0$th line). The $n$th line of Pascal's triangle contains $n+1$ numbers, which are the coefficients $n \choose 0$, $n \choose 1$, and so on up to $n \choose n$. Thus the first few lines are 1 1 1 1 2 1 1 3 3 1 1 4 6 4 1 This would be the result of `pascals_triangle(5)`. Print the first 10 rows of Pascal's triangle. 1. The probability that an ubiased coin, tossed $n$ times, will come up heads $k$ times is ${n \choose k} / 2^n$. (Or instead of coins, perhaps you'd prefer to think of spins measured in a [Stern-Gerlach experiment](https://en.wikipedia.org/wiki/Stern%E2%80%93Gerlach_experiment).) - Write a function `heads_exactly(n,k)` to calculate the probability that a coin tossed $n$ times comes up heads exactly $k$ times. - Write a function `heads_atleast(n,k)` to calculate the probability that a coin tossed $n$ times comes up heads $k$ or more times. - Print the probabilities (to four decimal places) that a coin tossed 100 times comes up heads exactly 70 times, and at least 70 times. You should print corresponding statements with the numbers so it is clear what they each mean.1. Along with the printed statements from Part 4, have your code generate and display two labelled plots for `heads_exactly(n,k)` and `heads_atleast(n,k)` with $n=100$. You should have values of $k$ on the $x$-axis, and probabilities on the $y$-axis. (Note that $k$ only takes integer values from 0 to $n$, inclusive. Your plots can be connected curves or have discrete markers for each point; either is fine.) OutputTo summarize, your program should output the following things:1. The first 10 rows of Pascal's triangle1. 
The probabilities (to three decimal places) that a coin tossed 100 times comes up heads exactly 70 times, and at least 70 times, with corresponding statements so it is clear what each number signifies.1. Two labeled plots for `heads_exactly(n,k)` and `heads_atleast(n,k)` with $n=100$, representing probability distributions for 100 coin flips. ReminderRemember to write informative doc strings, comment your code, and use descriptive function and variable names so others (and future you) can understand what you're doing!
###Code
'''The numpy library has a lot of useful functions
and we always use matplotlib for plotting, so it's
generally a good idea to import them at the beginning.'''
import numpy as np
import matplotlib.pyplot as plt
def factorial(n):
"""Returns the factorial of n"""
return_value = 1
#Try using a for loop to update the return_value and calculate n!
return return_value
def binomial(n, k):
"""Returns the binomial coefficient n choose k"""
#Use a conditional statement to return 1 in the case k = 0
return #Use factorial(n) to calculate the binomial coefficient
def pascals_triangle(N):
"""Prints out N rows of pascal's triangle"""
#A "double for loop" has been set up below;
#Python goes through the entire inner loop during each pass through the outer loop
for row in range(0, N + 1): #This is the outer loop; each pass through the loop corresponds to one row of the triangle
for k in range(0, row + 1): #This is is the inner loop; each pass through the loop corresponds to a number on the row
#Code here is part of each inner loop iteration (i.e. print a binomial coefficient)
#Code here is part of the outer loop
#This function doesn't need to return anything
def heads_exactly(n,k):
"""Returns the probability of getting k heads if you flip a coin n times"""
return #Use binomial(n,k) to calculate the probability
def heads_atleast(n,k):
"""Returns the probability of getting at least k heads if you flip a coin n times"""
total_prob = 0
#Use a for loop and heads_exactly(n,k) to update total_prob
return total_prob
#Now use your defined functions to produce the desired outputs
#For the plots, the np.arange() function is useful for creating a numpy array of integers
k_values = np.arange(1,101) #integers from 1 to 100 (lower bound is inclusive; upper bound is exclusive)
###Output
_____no_output_____
###Markdown
Problem 2: Semi-Empirical Mass Formula[Adapted from Newman, Exercise 2.10] In nuclear physics, the semi-empirical mass formula is a formula for calculating the approximte nuclear binding energy $B$ of an atomic nucleus with atomic number $Z$ and mass number $A$:$$ B = a_V A - a_S A^{2/3} - a_C \frac{Z^2}{A^{1/3}} - a_S \frac{(A-2Z)^2}{A} + \delta\frac{a_P}{A^{1/2}}, $$where, in units of millions of electron volts (MeV), the constants are $a_V = 14.64$, $a_S = 14.08$, $a_C = 0.64$, $a_S = 21.07$, $a_P=11.54$, and$$ \delta = \begin{cases}0 & \text{if } A \text{ is odd,}\\+1 & \text{if } A \text{ and } Z \text{ are both even,} \\-1 & \text{if } A \text{ is even and } Z \text{ is odd.}\end{cases} $$The values above are taken from D. Benzaid et al., NUCL SCI TECH 31, 9 (2020); https://doi.org/10.1007/s41365-019-0718-81. Write a function `binding_energy(A, Z)` that takes as its input the values of $A$ and $Z$, and returns the binding energy for the corresponding atom. Check your function by computing the binding energy of an atom with $A = 58$ and $Z = 28$. (Hint: The correct answer is around 490 MeV.)1. Write a function `binding_energy_per_nucleon(A, Z)` which returns not the total binding energy $B$, but the binding energy per nucleon, which is $B/A$.1. Write a function `max_binding_energy_per_nucleon(Z)` which takes as input just a single value of the atomic number $Z$ and then goes through all values of $A$ from $A = Z$ to $A = 3Z$, to find the one that has the largest binding energy per nucleon. This is the most stable nucleus with the given atomic number. Have your function return the value of $A$ for this most stable nucleus and the value of the binding energy per nucleon.1. Finally, use the functions you've written to write a program which runs through all values of $Z$ from 1 to 100 and prints out the most stable value of $A$ for each one. At what value of $Z$ does the maxium binding energy per nucleon occur? (The true answer, in real life, is $Z = 28$, which is nickel. You should find that the semi-empirical mass formula gets the answer roughly right, but not exactly.) OutputYour final output should look like Z = 1 : most stable A is 2 Z = 2 : most stable A is 4 . . . Z = 10 : most stable A is 20 Z = 11 : most stable A is 23 . . . Z = 100 : most stable A is 210 The most stable Z is ____ with binding energy per nucleon ____With the ...'s and ____'s replaced with your results. The binding energy per nucleon in the last line should have three decimal places.For maximum readability, you should include the extra whitespace around the $Z =$ numbers so everything lines up, as shown. (To remember the `print` formatting syntax to do this, see Table 1.1 in the Ayars text.) ReminderRemember to write informative doc strings, comment your code, and use descriptive function and variable names so others (and future you) can understand what you're doing!
###Code
import numpy as np
def binding_energy(A, Z):
"""Returns the nuclear binding energy in MeV of an atomic nucleus with atomic number Z and mass number A"""
aV = 14.64
aS = 14.08
aC = 0.64
    aA = 21.07  # asymmetry coefficient (written as a second a_S in the problem statement; renamed so it does not overwrite aS above)
aP = 11.54
#Use conditional statements (if, elif, else) to declare the variable delta with the appropriate value
return #Use the above formula for B, the binding energy
#Now check your function by calculating the requested binding energy
def binding_energy_per_nucleon(A, Z):
"""Returns the nuclear binding energy per nucleon in MeV of an atomic nucleus with atomic number Z and mass number A"""
return #Use binding_energy(A, Z) and the number of nucleons
def max_binding_energy_per_nucleon(Z):
"""For atomic nucleus with atomic number Z, returns that mass number A that yields that maximum binding energy
per nucleon, as well as that resultant maximum binding energy per nucleon in MeV"""
#We can make our default return value A = Z and the corresponding binding energy
max_A = Z
max_binding_energy_per_nucleon = binding_energy_per_nucleon(Z, Z)
#Use a for loop to go from A = Z to A = 3*Z, and update the return variables if a new maximum is found
#A conditional statement within the loop is useful for comparing max_binding_energy_per_nucleon to a potential new maximum
return max_A, max_binding_energy_per_nucleon
#Now use a for loop and the function max_binding_energy_per_nucleon(Z) to print the final output
###Output
_____no_output_____
###Markdown
Problem 3: Particle in a Box[Adapted from Ayars, Problem 3-1] The energy levels for a quantum particle in a three-dimensional rectangular box of dimensions $\{L_1, L_2, \text{ and } L_3\}$ are given by$$ E_{n_1, n_2, n_3} = \frac{\hbar^2 \pi^2}{2m} \left[ \frac{n_1^2}{L_1^2} + \frac{n_2^2}{L_2^2} + \frac{n_3^2}{L_3^2} \right] $$where the $n$'s are integers greater than or equal to one. Your goal is to write a program that will calculate, and list in order of increasing energy, the values of the $n$'s for the 10 lowest *different* energy levels, given a box for which $L_2 = 2L_1$ and $L_3 = 4L_1$.Your program should include two user-defined functions that you may find helpful in accomplishing your goal:1. A function `energy(n1, n2, n3)` that takes integer values $n_1$, $n_2$, and $n_3$, and computes the corresponding energy level in units of $\hbar^2 \pi^2/2 m L_1^2$.1. A function `lowest_unique_K(K, List)` which takes a positive integer $K$ and a list of real numbers `List`, and returns an ordered (ascending) list of the lowest $K$ unique numbers in the list `List`. For instance, `lowest_unique_K(3, [-0.5, 3, 3, 2, 6, 7, 7])` would return `[-0.5, 2, 3]`. The function should not modify the original list `List`. - As with most programming puzzles, there are several ways to write this function. Depending on how you do it, you may or may not find it helpful to Google how to "sort" lists, or how to "del" or "pop" items out of lists. You may also wish to make other user-defined functions depending on how you go about solving the problem. In fact, if you find some clever way to solve the problem that doesn't use `lowest_unique_K`, that is fine too! (You still need to write `lowest_unique_K`, though.) But whatever you do, be sure to comment your code clearly! OutputYour final output should look like this (though with different numbers, and not necessarily the same number of lines): energy, n1, n2, n3 (0.4375, 1, 1, 1) (0.625, 1, 2, 1) (0.8125, 2, 1, 1) (0.9375, 1, 3, 1) (1.0, 2, 2, 1) (1.1875, 1, 1, 2) (1.3125, 2, 3, 1) (1.375, 1, 2, 2) (1.375, 1, 4, 1) (1.4375, 3, 1, 1) (1.5625, 2, 1, 2)Notice how there are only 10 unique energies listed, but more than 10 lines. Each line could also have brackets instead of parentheses if you prefer, like this: `[0.4375, 1, 1, 1]`. ReminderRemember to write informative doc strings, comment your code, and use descriptive function and variable names so others (and future you) can understand what you're doing! Just for funIf you'd like, write a function `print_table(list_of_lists)` that takes a list of lists (or a list of tuples) and prints them in a nicely aligned table. Feel free to Google to get ideas on how to do this. Try to get your function to produce something like energy n1 n2 n3 0.4375 1 1 1 0.625 1 2 1 0.8125 2 1 1 0.9375 1 3 1 1.0 2 2 1 1.1875 1 1 2 1.3125 2 3 1 1.375 1 2 2 1.375 1 4 1 1.4375 3 1 1 1.5625 2 1 2
###Code
def energy(n1, n2, n3):
"""Returns the n-dependent coefficient of the particle-in-a-3D-box energy level for quantum numbers n1, n2, and n3.
The box's lengths along dimensions 1, 2, and 3 go as L, 2*L, 4*L"""
return #Use the formula given above
import copy
def lowest_unique_K(K, List):
"""Takes a positive integer K and a list of real numbers List, and returns an ordered (ascending) list of the lowest K unique numbers in the list List"""
lowest_unique_K_list = [0] * K #This is a list of zeros (K of them)
#Or you may want to start with lowest_unique_K_list = [], an empty list
copied_list = copy.copy(List) #This gives us a copy of List; any changes you make to copied_list will not affect List
#There's a lot of different ways to approach writing this function
#Try breaking it up into smaller steps and figure out what you'd like to do before writing any code
#If you have trouble turning logical steps into actual code, feel free to ask for help
return lowest_unique_K_list
#Now create a list of energies for different values of n1, n2, n3 (taking each from 1 to 10 should be sufficient)
#Remember to keep track of the corresponding n1, n2, n3 values for each energy, since we need to print them
#Then use lowest_unique_K(10, List) on this list of energies to find the first 10
#Finally, print these 10 energies and their corresponding n1, n2, n3 values
#You may find a dictionary helpful for keeping track of the association between energy values and n values
###Output
_____no_output_____
###Markdown
Your name here. Your Workshop section here. Homework 2: Control Structures ** Submit this notebook to bCourses to receive credit for this assignment. **Please complete this homework assignment in code cells in the iPython notebook. Include comments in your code when necessary. Enter your name in the cell at the top of the notebook, and rename the notebook [email_name]_HW02.ipynb, where [email_name] is the part of your UCB email address that precedes "@berkeley.edu". Please also save the notebook once you have executed it as a PDF and upload that to bcourses as well (note, that when saving as PDF you don't want to use the option with latex because it crashes, but rather the one to save it directly as a PDF). Problem 1: Binomial Coefficients[Adapted from Newman, Exercise 2.11] The binomial coefficient $n \choose k$ is an integer equal to$$ {n \choose k} = \frac{n!}{k!(n-k)!} = \frac{n \times (n-1) \times (n-2) \times \cdots \times (n-k + 1)}{1 \times 2 \times \cdots \times k} $$when $k \geq 1$, or ${n \choose 0} = 1$ when $k=0$. (The special case $k=0$ can be included in the general definition by using the conventional definition $0! \equiv 1$.)1. Write a function `factorial(n)` that takes an integer $n$ and returns $n!$ as an integer. It should yield $1$ when $n=0$. You may assume that the argument will also be an integer greater than or equal to 0.1. Using the form of the binomial coefficient given above, write a function `binomial(n,k)` that calculates the binomial coefficient for given $n$ and $k$. Make sure your function returns the answer in the form of an integer (not a float) and gives the correct value of 1 for the case where $k=0$. (Hint: Use your `factorial` function from Part 1.)1. Using your `binomial` function, write a function `pascals_triangle(N)` to print out the first $N$ lines of "Pascal's triangle" (starting with the $0$th line). The $n$th line of Pascal's triangle contains $n+1$ numbers, which are the coefficients $n \choose 0$, $n \choose 1$, and so on up to $n \choose n$. Thus the first few lines are 1 1 1 1 2 1 1 3 3 1 1 4 6 4 1 This would be the result of `pascals_triangle(5)`. Print the first 15 rows of Pascal's triangle. 1. The probability that an ubiased coin, tossed $n$ times, will come up heads $k$ times is ${n \choose k} / 2^n$. (Or instead of coins, perhaps you'd prefer to think of spins measured in a [Stern-Gerlach experiment](https://en.wikipedia.org/wiki/Stern%E2%80%93Gerlach_experiment).) - Write a function `heads_exactly(n,k)` to calculate the probability that a coin tossed $n$ times comes up heads exactly $k$ times. - Write a function `heads_atleast(n,k)` to calculate the probability that a coin tossed $n$ times comes up heads $k$ or more times. - Print the probabilities (to three decimal places) that a coin tossed 100 times comes up heads exactly 60 times, and at least 60 times. You should print corresponding statements with the numbers so it is clear what they each mean.1. Along with the printed statements from Part 4, have your code generate and display two labelled plots for `heads_exactly(n,k)` and `heads_atleast(n,k)` with $n=100$. You should have values of $k$ on the $x$-axis, and probabilities on the $y$-axis. (Note that $k$ only takes integer values from 0 to $n$, inclusive. Your plots can be connected curves or have discrete markers for each point; either is fine.) OutputTo summarize, your program should output the following things:1. The first 15 rows of Pascal's triangle1. 
The probabilities (to three decimal places) that a coin tossed 100 times comes up heads exactly 60 times, and at least 60 times, with corresponding statements so it is clear what each number signifies.1. Two labeled plots for `heads_exactly(n,k)` and `heads_atleast(n,k)` with $n=100$, representing probability distributions for 100 coin flips. ReminderRemember to write informative doc strings, comment your code, and use descriptive function and variable names so others (and future you) can understand what you're doing!
###Code
'''The numpy library has a lot of useful functions
and we always use matplotlib for plotting, so it's
generally a good idea to import them at the beginning.'''
import numpy as np
import matplotlib.pyplot as plt
def factorial(n):
"""Returns the factorial of n"""
return_value = 1
#Try using a for loop to update the return_value and calculate n!
return return_value
def binomial(n, k):
"""Returns the binomial coefficient n choose k"""
#Use a conditional statement to return 1 in the case k = 0
return #Use factorial(n) to calculate the binomial coefficient
def pascals_triangle(N):
"""Prints out N rows of pascal's triangle"""
#A "double for loop" has been set up below;
#Python goes through the entire inner loop during each pass through the outer loop
for row in range(0, N + 1): #This is the outer loop; each pass through the loop corresponds to one row of the triangle
for k in range(0, row + 1): #This is is the inner loop; each pass through the loop corresponds to a number on the row
#Code here is part of each inner loop iteration (i.e. print a binomial coefficient)
#Code here is part of the outer loop
#This function doesn't need to return anything
def heads_exactly(n,k):
"""Returns the probability of getting k heads if you flip a coin n times"""
return #Use binomial(n,k) to calculate the probability
def heads_atleast(n,k):
"""Returns the probability of getting at least k heads if you flip a coin n times"""
total_prob = 0
#Use a for loop and heads_exactly(n,k) to update total_prob
return total_prob
#Now use your defined functions to produce the desired outputs
#For the plots, the np.arange() function is useful for creating a numpy array of integers
k_values = np.arange(1,101) #integers from 1 to 100 (lower bound is inclusive; upper bound is exclusive)
###Output
_____no_output_____
###Markdown
Problem 2: Semi-Empirical Mass Formula[Adapted from Newman, Exercise 2.10] In nuclear physics, the semi-empirical mass formula is a formula for calculating the approximte nuclear binding energy $B$ of an atomic nucleus with atomic number $Z$ and mass number $A$:$$ B = a_V A - a_S A^{2/3} - a_C \frac{Z^2}{A^{1/3}} - a_S \frac{(A-2Z)^2}{A} + \delta\frac{a_P}{A^{1/2}}, $$where, in units of millions of electron volts (MeV), the constants are $a_V = 14.64$, $a_S = 14.08$, $a_C = 0.64$, $a_S = 21.07$, $a_P=11.54$, and$$ \delta = \begin{cases}0 & \text{if } A \text{ is odd,}\\+1 & \text{if } A \text{ and } Z \text{ are both even,} \\-1 & \text{if } A \text{ is even and } Z \text{ is odd.}\end{cases} $$The values above are taken from D. Benzaid et al., NUCL SCI TECH 31, 9 (2020); https://doi.org/10.1007/s41365-019-0718-81. Write a function `binding_energy(A, Z)` that takes as its input the values of $A$ and $Z$, and returns the binding energy for the corresponding atom. Check your function by computing the binding energy of an atom with $A = 58$ and $Z = 28$. (Hint: The correct answer is around 490 MeV.)1. Write a function `binding_energy_per_nucleon(A, Z)` which returns not the total binding energy $B$, but the binding energy per nucleon, which is $B/A$.1. Write a function `max_binding_energy_per_nucleon(Z)` which takes as input just a single value of the atomic number $Z$ and then goes through all values of $A$ from $A = Z$ to $A = 3Z$, to find the one that has the largest binding energy per nucleon. This is the most stable nucleus with the given atomic number. Have your function return the value of $A$ for this most stable nucleus and the value of the binding energy per nucleon.1. Finally, use the functions you've written to write a program which runs through all values of $Z$ from 1 to 100 and prints out the most stable value of $A$ for each one. At what value of $Z$ does the maxium binding energy per nucleon occur? (The true answer, in real life, is $Z = 28$, which is nickel. You should find that the semi-empirical mass formula gets the answer roughly right, but not exactly.) OutputYour final output should look like Z = 1 : most stable A is 2 Z = 2 : most stable A is 4 . . . Z = 10 : most stable A is 20 Z = 11 : most stable A is 23 . . . Z = 100 : most stable A is 210 The most stable Z is ____ with binding energy per nucleon ____With the ...'s and ____'s replaced with your results. The binding energy per nucleon in the last line should have three decimal places.For maximum readability, you should include the extra whitespace around the $Z =$ numbers so everything lines up, as shown. (To remember the `print` formatting syntax to do this, see Table 1.1 in the Ayars text.) ReminderRemember to write informative doc strings, comment your code, and use descriptive function and variable names so others (and future you) can understand what you're doing!
###Code
import numpy as np
def binding_energy(A, Z):
"""Returns the nuclear binding energy in MeV of an atomic nucleus with atomic number Z and mass number A"""
aV = 14.64
aS = 14.08
aC = 0.64
    aA = 21.07  # asymmetry coefficient (written as a second a_S in the problem statement; renamed so it does not overwrite aS above)
aP = 11.54
#Use conditional statements (if, elif, else) to declare the variable delta with the appropriate value
return #Use the above formula for B, the binding energy
#Now check your function by calculating the requested binding energy
def binding_energy_per_nucleon(A, Z):
"""Returns the nuclear binding energy per nucleon in MeV of an atomic nucleus with atomic number Z and mass number A"""
return #Use binding_energy(A, Z) and the number of nucleons
def max_binding_energy_per_nucleon(Z):
"""For atomic nucleus with atomic number Z, returns that mass number A that yields that maximum binding energy
per nucleon, as well as that resultant maximum binding energy per nucleon in MeV"""
#We can make our default return value A = Z and the corresponding binding energy
max_A = Z
max_binding_energy_per_nucleon = binding_energy_per_nucleon(Z, Z)
#Use a for loop to go from A = Z to A = 3*Z, and update the return variables if a new maximum is found
#A conditional statement within the loop is useful for comparing max_binding_energy_per_nucleon to a potential new maximum
return max_A, max_binding_energy_per_nucleon
#Now use a for loop and the function max_binding_energy_per_nucleon(Z) to print the final output
###Output
_____no_output_____
###Markdown
Problem 3: Particle in a Box[Adapted from Ayars, Problem 3-1] The energy levels for a quantum particle in a three-dimensional rectangular box of dimensions $\{L_1, L_2, \text{ and } L_3\}$ are given by$$ E_{n_1, n_2, n_3} = \frac{\hbar^2 \pi^2}{2m} \left[ \frac{n_1^2}{L_1^2} + \frac{n_2^2}{L_2^2} + \frac{n_3^2}{L_3^2} \right] $$where the $n$'s are integers greater than or equal to one. Your goal is to write a program that will calculate, and list in order of increasing energy, the values of the $n$'s for the 10 lowest *different* energy levels, given a box for which $L_2 = 2L_1$ and $L_3 = 4L_1$.Your program should include two user-defined functions that you may find helpful in accomplishing your goal:1. A function `energy(n1, n2, n3)` that takes integer values $n_1$, $n_2$, and $n_3$, and computes the corresponding energy level in units of $\hbar^2 \pi^2/2 m L_1^2$.1. A function `lowest_unique_K(K, List)` which takes a positive integer $K$ and a list of real numbers `List`, and returns an ordered (ascending) list of the lowest $K$ unique numbers in the list `List`. For instance, `lowest_unique_K(3, [-0.5, 3, 3, 2, 6, 7, 7])` would return `[-0.5, 2, 3]`. The function should not modify the original list `List`. - As with most programming puzzles, there are several ways to write this function. Depending on how you do it, you may or may not find it helpful to Google how to "sort" lists, or how to "del" or "pop" items out of lists. You may also wish to make other user-defined functions depending on how you go about solving the problem. In fact, if you find some clever way to solve the problem that doesn't use `lowest_unique_K`, that is fine too! (You still need to write `lowest_unique_K`, though.) But whatever you do, be sure to comment your code clearly! OutputYour final output should look like this (though with different numbers, and not necessarily the same number of lines): energy, n1, n2, n3 (0.4375, 1, 1, 1) (0.625, 1, 2, 1) (0.8125, 2, 1, 1) (0.9375, 1, 3, 1) (1.0, 2, 2, 1) (1.1875, 1, 1, 2) (1.3125, 2, 3, 1) (1.375, 1, 2, 2) (1.375, 1, 4, 1) (1.4375, 3, 1, 1) (1.5625, 2, 1, 2)Notice how there are only 10 unique energies listed, but more than 10 lines. Each line could also have brackets instead of parentheses if you prefer, like this: `[0.4375, 1, 1, 1]`. ReminderRemember to write informative doc strings, comment your code, and use descriptive function and variable names so others (and future you) can understand what you're doing! Just for funIf you'd like, write a function `print_table(list_of_lists)` that takes a list of lists (or a list of tuples) and prints them in a nicely aligned table. Feel free to Google to get ideas on how to do this. Try to get your function to produce something like energy n1 n2 n3 0.4375 1 1 1 0.625 1 2 1 0.8125 2 1 1 0.9375 1 3 1 1.0 2 2 1 1.1875 1 1 2 1.3125 2 3 1 1.375 1 2 2 1.375 1 4 1 1.4375 3 1 1 1.5625 2 1 2
###Code
def energy(n1, n2, n3):
"""Returns the n-dependent coefficient of the particle-in-a-3D-box energy level for quantum numbers n1, n2, and n3.
The box's lengths along dimensions 1, 2, and 3 go as L, 2*L, 4*L"""
return #Use the formula given above
import copy
def lowest_unique_K(K, List):
"""Takes a positive integer K and a list of real numbers List, and returns an ordered (ascending) list of the lowest K unique numbers in the list List"""
lowest_unique_K_list = [0] * K #This is a list of zeros (K of them)
#Or you may want to start with lowest_unique_K_list = [], an empty list
copied_list = copy.copy(List) #This gives us a copy of List; any changes you make to copied_list will not affect List
#There's a lot of different ways to approach writing this function
#Try breaking it up into smaller steps and figure out what you'd like to do before writing any code
#If you have trouble turning logical steps into actual code, feel free to ask for help
return lowest_unique_K_list
#Now create a list of energies for different values of n1, n2, n3 (taking each from 1 to 10 should be sufficient)
#Remember to keep track of the corresponding n1, n2, n3 values for each energy, since we need to print them
#Then use lowest_unique_K(10, List) on this list of energies to find the first 10
#Finally, print these 10 energies and their corresponding n1, n2, n3 values
#You may find a dictionary helpful for keeping track of the association between energy values and n values
###Output
_____no_output_____ |
04 - Data Analysis With Pandas/assignments/Exercise_11_Solution.ipynb | ###Markdown
Coding Exercises (Part 2) Full Data Workflow A-Z: Cleaning Data Exercise 11: Cleaning messy Data Now, you will have the opportunity to analyze your own dataset. __Follow the instructions__ and insert your code! You are either requested to - Complete the Code and __Fill in the gaps__. Gaps are marked with "__---__" and are __placeholders__ for your code fragment. - Write Code completely __on your own__ In some exercises, you will find questions that can only be answered, if your code is correct and returns the right output! The correct answer is provided below your coding cell. There you can check whether your code is correct. If you need a hint, check the __Hints Section__ at the end of this Notebook. Exercises and Hints are numerated accordingly. If you need some further help or if you want to check your code, you can also check the __solutions notebook__. Have Fun! -------------------------------------------------------------------------------------------------------------- Option 1: Self_guided __Import__ the cars dataset from the csv-file __cars_unclean.csv__ and inspect. Then, __clean up__ the dataset:- Identify and handle __inconsistent data__- Each column/feature should have the __appropriate/most functional datatype__- Identify and handle __missing values__- Identify and handle __duplicates__- Have a closer look into columns with __strings__ and clean up- Identify and handle __erroneous outliers__ in numerical columns(hint: there might be a "fat finger" issue in one column and some value(s) in the mpg column could be in "gallons per mile" units)- __Save and export__ the cleaned dataset in a new csv-file (cars_clean.csv)- Change the datatype of appropriate columns to __categorical__. -------------------------- Option 2: Guided and Instructed STOP HERE, IF YOU WANT TO DO THE EXERCISE ON YOUR OWN! +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
###Code
# run the cell!
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# run the cell!
cars = pd.read_csv("cars_unclean.csv")
###Output
_____no_output_____
###Markdown
__Inspect__ the DataFrame and identify obviously __inconsistent data__!
###Code
# run the cell!
cars.head(20)
# run the cell!
cars.tail(10)
# run the cell!
cars.info()
###Output
_____no_output_____
###Markdown
85. __Identify__ one __column label__ that should be changed and adjust/__rename__ the column label! __Fill in the gaps__!
###Code
cars.rename(columns = {"model year": "model_year"}, inplace = True)
###Output
_____no_output_____
###Markdown
86. Have a closer look at the __origin__ column by analyzing the __frequency/count__ of unique values! Can you find __any inconsistency__?
###Code
cars.origin.value_counts()
###Output
_____no_output_____
###Markdown
There are the values ... usa and United States 87. __Replace__ the value __"United States"__ in the origin column! __Save__ the change!
###Code
cars.origin.replace("United States", "usa", inplace = True)
###Output
_____no_output_____
###Markdown
Inspect and __identify__ the __problem__ in the column __horsepower__!
###Code
# run the cell!
cars.horsepower.head()
###Output
_____no_output_____
###Markdown
Datatype should be ... numerical. But first of all, we need to remove...? 88. Apply the appropriate __string operation__ to __remove "hp"__ from the horsepower column! Pay attention to __whitespaces__! __Overwrite__ the horsepower column!
###Code
cars.horsepower = cars.horsepower.str.replace(" hp", "")
# run the cell and inspect!
cars.head()
###Output
_____no_output_____
###Markdown
Run and inspect, anything __strange__?
###Code
#run the cell!
pd.options.display.min_rows = None
# run the cell!
cars.horsepower.value_counts()
###Output
_____no_output_____
###Markdown
There are 6 entries with the value ... "Not available" 89. Create __"real" missing values__ in the column horsepower! __Save__ the change! __Fill in the gaps__!
###Code
cars.horsepower.replace("Not available", np.nan, inplace = True)
###Output
_____no_output_____
###Markdown
90. Now you can __convert the datatype__ in the column __horsepower__! __Overwrite__ the column!
###Code
cars.horsepower = cars.horsepower.astype("float")
###Output
_____no_output_____
###Markdown
Inspect!
###Code
# run the cell!
cars.info()
# run the cell!
cars.head(7)
###Output
_____no_output_____
###Markdown
Any __inconsistencies__ in the column __name__? Inspect one element!
###Code
#run the cell!
cars.loc[4, "name"]
###Output
_____no_output_____
###Markdown
It seems like some names are uppercase, while others are lowercase. And there are some excess whitespaces in the strings. 91. __Convert__ all names to __lowercase__ and __remove all whitespaces__ on the left ends and right ends!
###Code
cars.name = cars.name.str.lower().str.strip()
###Output
_____no_output_____
###Markdown
Run the next two cells and identify (erroneous) outliers in the numercial columns!
###Code
# run the cell!
cars.describe()
# run the cell!
cars.plot(subplots = True, figsize = (15,12))
plt.show()
###Output
_____no_output_____
###Markdown
92. Inspect the column __model_year__ in more detail by analyzing the __frequency/counts__ of unique values! Anything __strange__?
###Code
cars.model_year.value_counts()
###Output
_____no_output_____
###Markdown
There are 5 entries with ... 1973 instead of 73. 93. __Replace__ the value __1973__! __Save__ the change!
###Code
cars.model_year.replace(1973, 73, inplace = True)
###Output
_____no_output_____
###Markdown
94. Inspect the column __weight__ by __sorting__ the values from __high to low__. Can you see the __extreme value__?
###Code
cars.weight.sort_values(ascending = False)
###Output
_____no_output_____
###Markdown
The by far highest value is ... 23000 lbs. Must be an error! 95. __Select__ the complete __row__ of the outlier with the method __idxmax()__!
###Code
cars.loc[cars.weight.idxmax()]
###Output
_____no_output_____
###Markdown
It's an opel manta ... could be a "fat finger" problem, weight could be 2300 instead of 23000. 96. __Overwrite__ the erroneous outlier! __Fill in the gaps__!
###Code
cars.loc[cars.weight.idxmax(), "weight"] = 2300
###Output
_____no_output_____
###Markdown
Inspect the column __mpg__! Any strange __outlier__?
###Code
# run the cell!
cars.mpg.sort_values()
###Output
_____no_output_____
###Markdown
An mpg of ... 0.060606 cannot be correct... 97. __Select__ the complete __row__ of the outlier with the method __idxmin()__!
###Code
cars.loc[cars.mpg.idxmin()]
###Output
_____no_output_____
###Markdown
98. After some research we have found out that this extreme value is in __"gallons per mile"__ units instead of "miles per gallon". __Convert__ to __"miles per gallon"__ units! __Fill in the gaps__!
###Code
cars.loc[cars.mpg.idxmin(), "mpg"] = 1/cars.loc[cars.mpg.idxmin(), "mpg"]
###Output
_____no_output_____
###Markdown
99. Next, select all __rows__ with at least one __missing__/na value! __Fill in the gaps__!
###Code
cars.loc[cars.isna().any(axis = 1)]
###Output
_____no_output_____
###Markdown
There are 6 cars where the horsepower is unknown. 100. As horsepower is an important feature in the cars dataset, we decide to remove all 6 rows. __Remove__ and __save__ the change!
###Code
cars.dropna(inplace= True)
###Output
_____no_output_____
###Markdown
Now let's find __duplicates__. First, we need to understand __which columns__ we have to take into consideration to identify duplicates. 101. The first __naive assumption__ is that two cars cannot have the __same name__. Let's count the number of __name-duplicates__. __Fill in the gaps__!
###Code
cars.duplicated(subset = ["name"]).sum()
###Output
_____no_output_____
###Markdown
There are ... 86 potential duplicates to remove. 102. Let's inspect the __duplicated pairs__ by selecting __all instances__ of a name duplicate! __Fill in the gaps__! Should the __name__ be the __only criterion__ to identify duplicates?
###Code
cars.loc[cars.duplicated(subset = ["name"], keep = False)].sort_values("name")
###Output
_____no_output_____
###Markdown
No! Cars can have several vintages/model_year and several variants with different technical specifications (e.g. weight, horsepower). 103. To be on the safe side, let's include __all columns__ to identify duplicates. __Count__ the number of duplicates! __Fill in the gaps__!
###Code
cars.duplicated().sum()
###Output
_____no_output_____
###Markdown
There are ... 10 potential duplicates. 104. Let's inspect the __duplicated pairs__ by selecting __all instances__ of a duplicate! __Fill in the gaps__!
###Code
cars.loc[cars.duplicated(keep = False)].sort_values("name")
###Output
_____no_output_____
###Markdown
All pairs seem to be real duplicates. 105. __Drop one instance__ of each duplicated pair! __Save__ the change!
###Code
cars.drop_duplicates(inplace = True)
# run the cell
cars.head()
# run the cell!
cars.info()
###Output
_____no_output_____
###Markdown
106. Our dataset seems to be pretty clean now! __Save__ and __export__ to a new csv-file (cars_clean.csv)! Do not export the RangeIndex!
###Code
cars.to_csv("cars_clean.csv", index= False)
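# Optional verification (not part of the original exercise): re-import the exported file
# to confirm the cleaned dataset round-trips as expected.
pd.read_csv("cars_clean.csv").head()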
###Output
_____no_output_____
###Markdown
Call the __describe()__ method on all __non-numerical columns__!
###Code
# run the cell!
cars.describe(include = "O")
###Output
_____no_output_____
###Markdown
Are there any __categorical features__ (only a few unique values) where the datatype could be __converted to "category"__? 107. If so, __convert__ and __overwrite__ the column(s)!
###Code
cars.origin = cars.origin.astype("category")
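# Optional check (not part of the original exercise): per-column memory usage in bytes;
# the categorical origin column should now use noticeably less memory.
cars.memory_usage(deep=True)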
###Output
_____no_output_____
###Markdown
__Inspect__. Did we __reduce memory usage__?
###Code
#run the cell!
cars.info()
###Output
_____no_output_____ |
nb/shower_table_established.ipynb | ###Markdown
$\S3$ Established showers The following uses an edited subset of the `streamfulldata.csv` file from the IAU MDC. We select showers with at least one record with $q<0.15$ AU and status=1 (established). Notes:1. Missing $a$, $e$, $q$ values are recalculated and populated;2. the only skipped record is \#618, which has only $q$ and no reference;3. DLT has one optical record that is currently marked as "pro tempore" and is not included in the table below, but is considered in the paper.
###Code
import pandas as pd, numpy as np
mdc = pd.read_csv('../data/streamfulldata_established.csv', sep='\t', \
usecols=[3,4,7,8,9,12,13,14,15,16,17,18,22,24], \
names=['code', 'name', 'slon', 'RA', 'DEC', 'Vg', 'a', 'q', 'e', 'w', 'om', 'i', 'parent', 'tech'], \
dtype={'status':np.int8, 'slon':np.float64, 'RA':np.float64, 'DEC':np.float64, 'Vg':np.float64, \
'a':np.float64, 'q':np.float64})
pd.DataFrame(mdc)
def tj(a, e, i):
return 5.2/a + 2*np.sqrt((1-e**2)*a/5.2)*np.cos(np.deg2rad(i))
tj_series = tj(mdc['a'], mdc['e'], mdc['i'])
shr = np.sort(pd.unique(mdc['code']))
for shri in shr:
meanSlon = round(pd.DataFrame.mean(mdc['slon'].where(mdc['code'] == shri)), 1)
meanRA = round(pd.DataFrame.mean(mdc['RA'].where(mdc['code'] == shri)), 1)
meanDec = round(pd.DataFrame.mean(mdc['DEC'].where(mdc['code'] == shri)), 1)
meanVg = round(pd.DataFrame.mean(mdc['Vg'].where(mdc['code'] == shri)), 1)
meana = round(pd.DataFrame.mean(mdc['a'].where(mdc['code'] == shri)), 1)
meane = round(pd.DataFrame.mean(mdc['e'].where(mdc['code'] == shri)), 2)
meani = round(pd.DataFrame.mean(mdc['i'].where(mdc['code'] == shri)), 1)
meanTj = round(pd.DataFrame.mean(tj_series.where(mdc['code'] == shri)), 1)
print(shri, mdc['name'].where(mdc['code'] == shri).dropna(axis=0).iloc[0].strip(), '&', \
meanSlon, '$^\circ$ &', meanRA, '$^\circ$ &', meanDec, '$^\circ$ &', meanVg, '&', meana, '&', \
meane, '&', meani, '$^\circ$ &', meanTj)
###Output
AAN alpha Antliids & 313.1 $^\circ$ & 160.7 $^\circ$ & -11.9 $^\circ$ & 43.9 & 2.4 & 0.94 & 62.7 $^\circ$ & 2.5
ARI Daytime Arietids & 76.2 $^\circ$ & 42.5 $^\circ$ & 24.0 $^\circ$ & 38.2 & 1.9 & 0.95 & 24.6 $^\circ$ & 3.2
CTA chi Taurids & 220.5 $^\circ$ & 63.1 $^\circ$ & 25.4 $^\circ$ & 41.6 & 4.9 & 0.98 & 13.7 $^\circ$ & 1.4
DLT Daytime lambda Taurids & 85.5 $^\circ$ & 56.7 $^\circ$ & 11.5 $^\circ$ & 36.4 & 1.6 & 0.93 & 23.2 $^\circ$ & 3.7
DSX Daytime Sextantids & 186.7 $^\circ$ & 155.0 $^\circ$ & -1.6 $^\circ$ & 31.8 & 1.1 & 0.86 & 22.5 $^\circ$ & 5.1
EPG epsilon Pegasids & 108.6 $^\circ$ & 329.9 $^\circ$ & 14.5 $^\circ$ & 28.6 & 0.7 & 0.78 & 49.7 $^\circ$ & 7.3
EPR epsilon Perseids & 91.1 $^\circ$ & 55.7 $^\circ$ & 37.6 $^\circ$ & 44.3 & 7.3 & 0.98 & 57.1 $^\circ$ & 1.0
GEM Geminids & 261.6 $^\circ$ & 113.0 $^\circ$ & 32.3 $^\circ$ & 34.5 & 1.4 & 0.9 & 23.5 $^\circ$ & 4.2
JLE January Leonids & 282.5 $^\circ$ & 148.1 $^\circ$ & 23.9 $^\circ$ & 52.1 & 5.7 & 0.99 & 105.8 $^\circ$ & 0.8
KLE Daytime kappa Leonids & 182.1 $^\circ$ & 162.2 $^\circ$ & 15.3 $^\circ$ & 43.4 & 20.2 & 0.99 & 25.0 $^\circ$ & 0.9
NDA Northern delta Aquariids & 140.6 $^\circ$ & 345.6 $^\circ$ & 1.0 $^\circ$ & 39.2 & 2.2 & 0.96 & 22.0 $^\circ$ & 2.8
NOC Northern Daytime omega Cetids & 46.6 $^\circ$ & 5.7 $^\circ$ & 17.6 $^\circ$ & 34.9 & 1.3 & 0.91 & 38.1 $^\circ$ & 4.7
NOO November Orionids & 245.9 $^\circ$ & 89.1 $^\circ$ & 15.4 $^\circ$ & 43.1 & 11.2 & 0.99 & 36.2 $^\circ$ & 0.8
NZC Northern June Aquilids & 99.1 $^\circ$ & 308.4 $^\circ$ & -5.1 $^\circ$ & 37.8 & 1.7 & 0.93 & 38.8 $^\circ$ & 3.4
OCE Southern Daytime omega Cetids & 46.4 $^\circ$ & 20.7 $^\circ$ & -5.6 $^\circ$ & 36.7 & 1.6 & 0.92 & 35.1 $^\circ$ & 3.5
PAU Piscis Austrinids & 131.2 $^\circ$ & 350.2 $^\circ$ & -22.1 $^\circ$ & 44.0 & 4.4 & 0.97 & 58.6 $^\circ$ & 1.5
SDA Southern delta Aquariids & 126.8 $^\circ$ & 336.4 $^\circ$ & -16.1 $^\circ$ & 40.9 & 2.6 & 0.97 & 28.7 $^\circ$ & 2.3
SSE sigma Serpentids & 275.5 $^\circ$ & 243.5 $^\circ$ & -1.7 $^\circ$ & 43.5 & 2.7 & 0.93 & 62.1 $^\circ$ & 2.4
SZC Southern June Aquilids & 88.2 $^\circ$ & 307.3 $^\circ$ & -31.4 $^\circ$ & 37.0 & 1.4 & 0.93 & 41.9 $^\circ$ & 4.2
THA November theta Aurigids & 240.5 $^\circ$ & 92.3 $^\circ$ & 34.7 $^\circ$ & 33.1 & 1.1 & 0.89 & 26.4 $^\circ$ & 5.0
XRI Daytime xi Orionids & 123.7 $^\circ$ & 98.7 $^\circ$ & 15.9 $^\circ$ & 42.6 & 6.8 & 0.98 & 27.3 $^\circ$ & 1.1
ZCA Daytime zeta Cancrids & 153.5 $^\circ$ & 127.9 $^\circ$ & 15.3 $^\circ$ & 43.0 & 4.8 & 0.99 & 18.9 $^\circ$ & 1.4
|
resnet18.ipynb | ###Markdown
Feature Extraction with ResNet18In this notebook, we extract features with ResNet18. First, we use the pre-trained ResNet18 provided by PyTorch (trained on ImageNet), and then we fine-tune all the weights on the Fashion-MNIST dataset.We also provide code for statistical analysis of the extracted features. Pretrained ResNet18 (Fixed)
###Code
import os
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import transforms
from tqdm import tqdm
from torchvision.models import resnet18
import numpy as np
args = {
'learning_rate': 5e-2,
'batch_size': 16,
'num_worker': 4,
'random_seed': 8771795,
'augmentation': False,
'num_epoch': 20,
'cuda': '3'
}
# set device
os.environ["CUDA_VISIBLE_DEVICES"] = args['cuda']
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# Set random seed
torch.random.manual_seed(args['random_seed'])
# Define transformation
test_transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.Grayscale(num_output_channels=3),
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))
])
train_valid_transform = test_transform
if args['augmentation']:
train_valid_transform = transforms.Compose([
# transforms.RandomResizedCrop((224, 224)),
transforms.Resize((224, 224)),
transforms.Grayscale(num_output_channels=3),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.RandomErasing(),
transforms.Normalize((0.5,), (0.5,))
])
# Load dataset
require_download = not os.path.exists('./dataset')  # note: not used below; download=True already skips data that is present
train_valid_dataset = torchvision.datasets.FashionMNIST('./dataset', train=True, transform=train_valid_transform, download=True)
test_dataset = torchvision.datasets.FashionMNIST('./dataset', train=False, transform=test_transform, download=True)
# Split train and validation
torch.random.manual_seed(args['random_seed'])
train_dataset, valid_dataset = torch.utils.data.random_split(train_valid_dataset, [54000, 6000])
# Generate dataloader
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=args['batch_size'], shuffle=True, num_workers=args['num_worker'])
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=args['batch_size'], shuffle=False, num_workers=args['num_worker'])
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=args['batch_size'], shuffle=False, num_workers=args['num_worker'])
model = resnet18(pretrained=True)
class Identity(nn.Module):
def __init__(self):
super(Identity, self).__init__()
def forward(self, x):
return x
model.fc = Identity()
model.eval()
print('total parameters:', sum(p.numel() for p in model.parameters()))
if not os.path.exists('./features'):
os.mkdir('./features')
model = model.to(device)
train_feats, valid_feats, test_feats = [], [], []
train_labels, valid_labels, test_labels = [], [], []
for iteration, (x, y) in tqdm(enumerate(train_loader)):
x, y = x.to(device), y.to(device)
features = model(x)
train_feats.append(features.cpu().detach().numpy())
train_labels.append(y.cpu().detach().numpy())
del x, y, features
train_feats = np.concatenate(train_feats)
train_feats.shape
train_labels = np.concatenate(train_labels)
train_labels.shape
np.save('features/resnet18_train_feat.npy', train_feats)
np.save('features/resnet18_train_label.npy', train_labels)
for iteration, (x, y) in tqdm(enumerate(valid_loader)):
model.eval()
x, y = x.to(device), y.to(device)
features = model(x)
valid_feats.append(features.cpu().detach().numpy())
valid_labels.append(y.cpu().detach().numpy())
del x, y, features
valid_feats = np.concatenate(valid_feats)
valid_feats.shape
valid_labels = np.concatenate(valid_labels)
valid_labels.shape
np.save('features/resnet18_valid_feat.npy', valid_feats)
np.save('features/resnet18_valid_label.npy', valid_labels)
for iteration, (x, y) in tqdm(enumerate(test_loader)):
model.eval()
x, y = x.to(device), y.to(device)
features = model(x)
test_feats.append(features.cpu().detach().numpy())
test_labels.append(y.cpu().detach().numpy())
del x, y, features
test_feats = np.concatenate(test_feats)
test_feats.shape
test_labels = np.concatenate(test_labels)
test_labels.shape
np.save('features/resnet18_test_feat.npy', test_feats)
np.save('features/resnet18_test_label.npy', test_labels)
###Output
_____no_output_____
###Markdown
Pretrained ResNet18 (Finetune)
###Code
import os
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
from torchvision import transforms
from tqdm import tqdm
from torchvision.models import resnet18
import numpy as np
args = {
'learning_rate': 1e-3,
'batch_size': 64,
'num_worker': 16,
'random_seed': 8771795,
'augmentation': False,
'num_epoch': 10,
'cuda': '3'
}
# set device
os.environ["CUDA_VISIBLE_DEVICES"] = args['cuda']
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# Set random seed
torch.random.manual_seed(args['random_seed'])
# Define transformation
test_transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.Grayscale(num_output_channels=3),
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))
])
train_valid_transform = test_transform
if args['augmentation']:
train_valid_transform = transforms.Compose([
# transforms.RandomResizedCrop((224, 224)),
transforms.Resize((224, 224)),
transforms.Grayscale(num_output_channels=3),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.RandomErasing(),
transforms.Normalize((0.5,), (0.5,))
])
# Load dataset
require_download = not os.path.exists('./dataset')  # note: unused; download=True below handles downloading
train_valid_dataset = torchvision.datasets.FashionMNIST('./dataset', train=True, transform=train_valid_transform, download=True)
test_dataset = torchvision.datasets.FashionMNIST('./dataset', train=False, transform=test_transform, download=True)
# Split train and validation
torch.random.manual_seed(args['random_seed'])
train_dataset, valid_dataset = torch.utils.data.random_split(train_valid_dataset, [54000, 6000])
# Generate dataloader
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=args['batch_size'], shuffle=True, num_workers=args['num_worker'])
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=args['batch_size'], shuffle=False, num_workers=args['num_worker'])
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=args['batch_size'], shuffle=False, num_workers=args['num_worker'])
###Output
_____no_output_____
###Markdown
Train only the last FC layer
###Code
model = resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False  # freeze the backbone (the original 'require_grad' typo had no effect)
model.fc
model.fc = nn.Sequential(nn.Linear(in_features=512, out_features=64, bias=True),
nn.ReLU(),
nn.Linear(in_features=64, out_features=10, bias=True))
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
model.to(device)
for epoch in range(20):
total_train = 0
for iteration, (x, y) in tqdm(enumerate(train_loader)):
model.train()
x, y = x.to(device), y.to(device)
pred = model(x)
loss = criterion(pred, y)
total_train += loss.item()
        optimizer.zero_grad()  # reset gradients accumulated from the previous step
        loss.backward()
        optimizer.step()
del x, y, pred
print('epoch {}: training loss: {}'.format(epoch+1, total_train/(iteration+1)))
torch.cuda.empty_cache()
val_acc = 0
for iteration, (x, y) in tqdm(enumerate(valid_loader)):
model.eval()
x, y = x.to(device), y.to(device)
out = model(x)
pred = torch.argmax(out, dim=1)
acc = sum(pred == y)
val_acc += acc.item()
print('epoch {}: valid acc: {}'.format(epoch+1, val_acc/6000))
test_acc = 0
for iteration, (x, y) in tqdm(enumerate(test_loader)):
model.eval()
x, y = x.to(device), y.to(device)
out = model(x)
pred = torch.argmax(out, dim=1)
acc = sum(pred == y)
test_acc += acc.item()
print('test acc: {}'.format(test_acc/10000))
test_acc = 0
for iteration, (x, y) in tqdm(enumerate(test_loader)):
model.eval()
x, y = x.to(device), y.to(device)
out = model(x)
pred = torch.argmax(out, dim=1)
acc = sum(pred == y)
test_acc += acc.item()
print('test acc: {}'.format(test_acc/10000))
torch.save(model.state_dict(), 'resnet18_before_2layers.pt')
###Output
_____no_output_____
###Markdown
Fine-tune all the layers
###Code
model = resnet18()
model.fc = nn.Sequential(nn.Linear(in_features=512, out_features=64, bias=True),
nn.ReLU(),
nn.Linear(in_features=64, out_features=10, bias=True))# nn.Linear(512, 10)
model.load_state_dict(torch.load('resnet18_before_2layers.pt'))
for param in model.parameters():
    param.requires_grad = True  # unfreeze all layers for full fine-tuning (fixes the 'require_grad' typo)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=2.5e-5)
model.to(device)
for epoch in range(10):
total_train = 0
for iteration, (x, y) in tqdm(enumerate(train_loader)):
model.train()
x, y = x.to(device), y.to(device)
pred = model(x)
loss = criterion(pred, y)
total_train += loss.item()
        optimizer.zero_grad()  # reset gradients accumulated from the previous step
        loss.backward()
        optimizer.step()
del x, y, pred
print('epoch {}: training loss: {}'.format(epoch+1, total_train/(iteration+1)))
torch.cuda.empty_cache()
val_acc = 0
for iteration, (x, y) in tqdm(enumerate(valid_loader)):
model.eval()
x, y = x.to(device), y.to(device)
out = model(x)
pred = torch.argmax(out, dim=1)
acc = sum(pred == y)
val_acc += acc.item()
print('epoch {}: valid acc: {}'.format(epoch+1, val_acc/6000))
test_acc = 0
for iteration, (x, y) in tqdm(enumerate(test_loader)):
model.eval()
x, y = x.to(device), y.to(device)
out = model(x)
pred = torch.argmax(out, dim=1)
acc = sum(pred == y)
test_acc += acc.item()
print('test acc: {}'.format(test_acc/10000))
torch.save(model.state_dict(), 'resnet18_after_2layers.pt')
###Output
_____no_output_____
###Markdown
Feature Extraction & Compute F1
###Code
import os
import torch
import torch.nn as nn
from sklearn.metrics import precision_recall_curve, average_precision_score, f1_score
import matplotlib.pyplot as plt
import numpy as np
import torchvision
from torchvision.models import resnet18
from torchvision import transforms
from tqdm import tqdm
args = {
'learning_rate': 1e-3,
'batch_size': 32,
'num_worker': 4,
'random_seed': 8771795,
'augmentation': False,
'num_epoch': 10,
'cuda': '5'
}
# set device
os.environ["CUDA_VISIBLE_DEVICES"] = args['cuda']
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# Set random seed
torch.random.manual_seed(args['random_seed'])
# Define transformation
test_transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.Grayscale(num_output_channels=3),
transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))
])
train_valid_transform = test_transform
if args['augmentation']:
train_valid_transform = transforms.Compose([
# transforms.RandomResizedCrop((224, 224)),
transforms.Resize((224, 224)),
transforms.Grayscale(num_output_channels=3),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.RandomErasing(),
transforms.Normalize((0.5,), (0.5,))
])
# Load dataset
require_download = not os.path.exists('./dataset')  # note: unused; download=True below handles downloading
train_valid_dataset = torchvision.datasets.FashionMNIST('./dataset', train=True, transform=train_valid_transform, download=True)
test_dataset = torchvision.datasets.FashionMNIST('./dataset', train=False, transform=test_transform, download=True)
# Split train and validation
torch.random.manual_seed(args['random_seed'])
train_dataset, valid_dataset = torch.utils.data.random_split(train_valid_dataset, [54000, 6000])
# Generate dataloader
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=args['batch_size'], shuffle=True, num_workers=args['num_worker'])
valid_loader = torch.utils.data.DataLoader(valid_dataset, batch_size=args['batch_size'], shuffle=False, num_workers=args['num_worker'])
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=args['batch_size'], shuffle=False, num_workers=args['num_worker'])
# pretrained ResNet18
model = resnet18()
# model.fc = nn.Linear(512, 10)
model.fc = nn.Sequential(nn.Linear(in_features=512, out_features=64, bias=True),
nn.ReLU(),
nn.Linear(in_features=64, out_features=10, bias=True))
model.load_state_dict(torch.load('resnet18_before_2layers.pt'))
model.to(device)
test_acc = 0
labels, preds = [], []
for iteration, (x, y) in tqdm(enumerate(test_loader)):
model.eval()
x, y = x.to(device), y.to(device)
out = model(x)
pred = torch.argmax(out, dim=1)
acc = sum(pred == y)
test_acc += acc.item()
labels.append(y.cpu().detach().numpy())
preds.append(pred.cpu().detach().numpy())
del out, pred, x, y
print('test acc: {}'.format(test_acc/10000))
labels = np.concatenate(labels)
preds = np.concatenate(preds)
assert labels.shape == preds.shape
from sklearn.metrics import precision_recall_fscore_support as score
from sklearn.metrics import recall_score, f1_score, precision_score, accuracy_score
from tabulate import tabulate
precision, recall, fscore, support = score(labels, preds)
pr = precision_score(labels, preds, average='macro')
re = recall_score(labels, preds, average='macro')
fs = f1_score(labels, preds, average='macro')
ac = accuracy_score(labels, preds)
table = [[i, p, r, f, s] for i, p, r, f, s in zip(range(10), precision, recall, fscore, support)]
table.append(['overall', pr, re, fs, 10000])
print(tabulate(table, headers=['class', 'precision', 'recall', 'f1', 'support']))
# pretrained ResNet18 + finetune
model = resnet18()
model.fc = nn.Sequential(nn.Linear(in_features=512, out_features=64, bias=True),
nn.ReLU(),
nn.Linear(in_features=64, out_features=10, bias=True))
model.load_state_dict(torch.load('resnet18_after_2layers.pt'))
model.to(device)
test_acc = 0
labels, preds = [], []
for iteration, (x, y) in tqdm(enumerate(test_loader)):
model.eval()
x, y = x.to(device), y.to(device)
out = model(x)
pred = torch.argmax(out, dim=1)
acc = sum(pred == y)
test_acc += acc.item()
labels.append(y.cpu().detach().numpy())
preds.append(pred.cpu().detach().numpy())
del out, pred, x, y
print('test acc: {}'.format(test_acc/10000))
labels = np.concatenate(labels)
preds = np.concatenate(preds)
assert labels.shape == preds.shape
precision, recall, fscore, support = score(labels, preds)
pr = precision_score(labels, preds, average='macro')
re = recall_score(labels, preds, average='macro')
fs = f1_score(labels, preds, average='macro')
ac = accuracy_score(labels, preds)
table = [[i, p, r, f, s] for i, p, r, f, s in zip(range(10), precision, recall, fscore, support)]
table.append(['overall', pr, re, fs, 10000])
print(tabulate(table, headers=['class', 'precision', 'recall', 'f1', 'support']))
# save labels & features
class Identity(nn.Module):
def __init__(self):
super(Identity, self).__init__()
def forward(self, x):
return x
model.fc = Identity()
train_feats, valid_feats, test_feats = [], [], []
train_labels, valid_labels, test_labels = [], [], []
for iteration, (x, y) in tqdm(enumerate(train_loader)):
x, y = x.to(device), y.to(device)
features = model(x)
train_feats.append(features.cpu().detach().numpy())
train_labels.append(y.cpu().detach().numpy())
del x, y, features
train_feats = np.concatenate(train_feats)
train_labels = np.concatenate(train_labels)
np.save('features/resnet18_finetune_2layers_train_feat.npy', train_feats)
np.save('features/resnet18_finetune_2layers_train_label.npy', train_labels)
del train_feats, train_labels
for iteration, (x, y) in tqdm(enumerate(valid_loader)):
x, y = x.to(device), y.to(device)
features = model(x)
valid_feats.append(features.cpu().detach().numpy())
valid_labels.append(y.cpu().detach().numpy())
del x, y, features
valid_feats = np.concatenate(valid_feats)
valid_labels = np.concatenate(valid_labels)
np.save('features/resnet18_finetune_2layers_valid_feat.npy', valid_feats)
np.save('features/resnet18_finetune_2layers_valid_label.npy', valid_labels)
del valid_feats, valid_labels
for iteration, (x, y) in tqdm(enumerate(test_loader)):
x, y = x.to(device), y.to(device)
features = model(x)
test_feats.append(features.cpu().detach().numpy())
test_labels.append(y.cpu().detach().numpy())
del x, y, features
test_feats = np.concatenate(test_feats)
test_labels = np.concatenate(test_labels)
np.save('features/resnet18_finetune_2layers_test_feat.npy', test_feats)
np.save('features/resnet18_finetune_2layers_test_label.npy', test_labels)
###Output
313it [00:08, 37.25it/s]
###Markdown
Stats
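The cells below compute second-order statistics of the saved 512-dimensional features. For reference (matching what the functions below compute, with $C = 10$ classes, $N$ samples $h_i$ with labels $y_i$, global mean $\mu_G$ and class means $\mu_c$):

$$
\mu_G = \frac{1}{N}\sum_{i=1}^{N} h_i, \qquad
\mu_c = \frac{1}{N_c}\sum_{i:\,y_i=c} h_i
$$

$$
\Sigma_B = \frac{1}{C}\sum_{c=1}^{C} (\mu_c-\mu_G)(\mu_c-\mu_G)^{\top}, \qquad
\Sigma_W = \frac{1}{N}\sum_{i=1}^{N} (h_i-\mu_{y_i})(h_i-\mu_{y_i})^{\top}
$$

The code then checks how closely the total covariance returned by `np.cov` matches $\Sigma_B + \Sigma_W$ (a small residual is expected, since `np.cov` normalizes by $N-1$ and the classes are only approximately balanced after the random split), and also reports the coefficient of variation of the norms $\lVert\mu_c-\mu_G\rVert$ and the pairwise cosine similarities between the centered class means.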
###Code
import numpy as np
import itertools
X_train = np.load('features/resnet18_train_feat.npy')
y_train = np.load('features/resnet18_train_label.npy')
X_valid = np.load('features/resnet18_valid_feat.npy')
y_valid = np.load('features/resnet18_valid_label.npy')
X_test = np.load('features/resnet18_test_feat.npy')
y_test = np.load('features/resnet18_test_label.npy')
# global mean
def global_mean(features):
return np.mean(features, axis=0)
train_global_mean = global_mean(X_train)
valid_global_mean = global_mean(X_valid)
test_global_mean = global_mean(X_test)
# class mean
def class_mean(features, labels):
means, nums = np.zeros((10, 512)), np.zeros((10, 512))
for feat, lab in zip(features, labels):
means[lab] += feat
nums[lab] += 1
means /= nums
return means
train_class_mean = class_mean(X_train, y_train)
valid_class_mean = class_mean(X_valid, y_valid)
test_class_mean = class_mean(X_test, y_test)
train_total_cov = np.cov(X_train.T)
valid_total_cov = np.cov(X_valid.T)
test_total_cov = np.cov(X_test.T)
# between class covariance
def between_class_cov(cl_mean, gl_mean):
bet_cov = np.zeros((512, 512))
for i in range(10):
bet_cov += np.dot(np.array([cl_mean[i] - gl_mean]).T,
np.array([cl_mean[i] - gl_mean]))
return bet_cov / 10
train_bet_cov = between_class_cov(train_class_mean, train_global_mean)
valid_bet_cov = between_class_cov(valid_class_mean, valid_global_mean)
test_bet_cov = between_class_cov(test_class_mean, test_global_mean)
# within class covariance
def within_class_cov(features, labels, cl_mean):
within_cov = np.zeros((512, 512))
for feature, label in zip(features, labels):
within_cov += np.dot(np.array([feature - cl_mean[label]]).T,
np.array([feature - cl_mean[label]]))
return within_cov / features.shape[0]
train_within_cov = within_class_cov(X_train, y_train, train_class_mean)
valid_within_cov = within_class_cov(X_valid, y_valid, valid_class_mean)
test_within_cov = within_class_cov(X_test, y_test, test_class_mean)
train_total_cov - (train_within_cov + train_bet_cov)
valid_total_cov - (valid_within_cov + valid_bet_cov)
test_total_cov - (test_within_cov + test_bet_cov)
print(np.max(train_total_cov - (train_within_cov + train_bet_cov)))
print(np.max(valid_total_cov - (valid_within_cov + valid_bet_cov)))
print(np.max(test_total_cov - (test_within_cov + test_bet_cov)))
contraction_train = np.trace(np.dot(train_within_cov, train_bet_cov)) / 10
contraction_valid = np.trace(np.dot(valid_within_cov, valid_bet_cov)) / 10
contraction_test = np.trace(np.dot(test_within_cov, test_bet_cov)) / 10
def closeness_equal_norms(cl_mean, gl_mean):
dist_array = np.zeros(10)
for i in range(10):
dist_array[i] = np.linalg.norm(cl_mean[i] - gl_mean)
return np.std(dist_array) / np.mean(dist_array)
closeness_equal_norms_train = closeness_equal_norms(train_class_mean, train_global_mean)
closeness_equal_norms_valid = closeness_equal_norms(valid_class_mean, valid_global_mean)
closeness_equal_norms_test = closeness_equal_norms(test_class_mean, test_global_mean)
# cosine similarity
def cos_sim(vA, vB):
return np.dot(vA, vB) / (np.sqrt(np.dot(vA,vA)) * np.sqrt(np.dot(vB,vB)))
cos_sim_list = []
for (c1, c2) in list(itertools.combinations(range(10), 2)):
cos_sim_list.append(cos_sim(train_class_mean[c1]-train_global_mean, train_class_mean[c2]-train_global_mean))
equal_angularity_train = np.std(cos_sim_list)
closeness_maximal_angle_train = np.mean(cos_sim_list + [1]*len(cos_sim_list)) /9
cos_sim_list = []
for (c1, c2) in list(itertools.combinations(range(10), 2)):
cos_sim_list.append(cos_sim(valid_class_mean[c1]-valid_global_mean, valid_class_mean[c2]-valid_global_mean))
equal_angularity_valid = np.std(cos_sim_list)
closeness_maximal_angle_valid = np.mean(cos_sim_list + [1]*len(cos_sim_list)) /9
cos_sim_list = []
for (c1, c2) in list(itertools.combinations(range(10), 2)):
cos_sim_list.append(cos_sim(test_class_mean[c1]-test_global_mean, test_class_mean[c2]-test_global_mean))
equal_angularity_test = np.std(cos_sim_list)
closeness_maximal_angle_test = np.mean(cos_sim_list + [1]*len(cos_sim_list)) /9
###Output
_____no_output_____
###Markdown
Face Recognition Competition. Dataset setup: 1. Upload the dataset archive compressed as a zip file. 2. Run `! unzip dataset.zip`. 3. Upload test.zip in the same way. 4. Place the dataset in the root directory, at the same level as the work directory.
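A condensed sketch of these setup steps (the directory layout shown here is an assumption inferred from the paths used later in this notebook, not something specified by the competition):

```python
# Sketch of the dataset setup; run in notebook cells after uploading the archives.
# Assumed layout, inferred from the paths used later in this notebook:
#   ./dataset/train/<person_name>/*.jpg   labelled training images (read with ImageFolder)
#   ./dataset/test/<person_name>/*.jpg    labelled evaluation images (read with ImageFolder)
#   ./test/*.jpg                          unlabelled images used to produce result.csv
! unzip dataset.zip
! unzip test.zip
```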
###Code
%matplotlib inline
import torch
import time
from torch import nn, optim
from torch.utils.data import Dataset, DataLoader
import torchvision
from torchvision.datasets import ImageFolder
from torchvision import transforms
from torchvision import models
import os
import sys
sys.path.append("..")
# import d2lzh_pytorch as d2l
device = torch.device('cuda:1' if torch.cuda.is_available() else 'cpu')
def load_data_face(batch_size):
transform = torchvision.transforms.Compose([
        # torchvision.transforms.Grayscale(num_output_channels=1),  # convert color images to grayscale; num_output_channels defaults to 1
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.Resize([330,330]),
torchvision.transforms.CenterCrop([224, 224]),
torchvision.transforms.ToTensor()
])
train_imgs = torchvision.datasets.ImageFolder('./dataset/train', transform=transform)
test_imgs = torchvision.datasets.ImageFolder('./dataset/test', transform=transform)
train_iter = torch.utils.data.DataLoader(train_imgs, batch_size=batch_size, shuffle=True, num_workers=4)
test_iter = torch.utils.data.DataLoader(test_imgs, batch_size=batch_size, shuffle=False, num_workers=4)
return train_iter, test_iter
batch_size = 32
# If you see an "out of memory" error, reduce batch_size or the resize dimensions
train_iter, test_iter = load_data_face(batch_size)
# Build and save the mapping between person names and class indices, used at test time to map predicted indices back to names
import pickle
# transform = torchvision.transforms.Compose([
# #torchvision.transforms.Grayscale(num_output_channels=1), # convert color images to grayscale; num_output_channels defaults to 1
# # torchvision.transforms.Resize([224, 224]),
# torchvision.transforms.ToTensor()
# ])
transform = torchvision.transforms.Compose([
    # torchvision.transforms.Grayscale(num_output_channels=1),  # convert color images to grayscale; num_output_channels defaults to 1
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.Resize([320,320]),
torchvision.transforms.CenterCrop([224, 224]),
torchvision.transforms.ToTensor()
])
test_imgs = torchvision.datasets.ImageFolder('dataset/test', transform=transform)
label = test_imgs.class_to_idx
label = {value:key for key, value in label.items()}
# print(len(label))
# Write the mapping to a file
label_hal = open('label.pkl', 'wb')
s = pickle.dumps(label)
label_hal.write(s)
label_hal.close()
net=models.resnet18(pretrained=True)
# net.fc = torch.nn.Linear(512, len(label))
net.fc = nn.Sequential(
nn.ReLU(),
nn.Dropout(0.6),
nn.Linear(512, len(label))
)
###Output
_____no_output_____
###Markdown
Training
###Code
def evaluate_accuracy(data_iter, net, device=None):
if device is None and isinstance(net, torch.nn.Module):
        # if no device is specified, use the device of the model's parameters
device = list(net.parameters())[0].device
acc_sum, n = 0.0, 0
with torch.no_grad():
for X, y in data_iter:
if isinstance(net, torch.nn.Module):
                net.eval()  # evaluation mode; this disables dropout
acc_sum += (net(X.to(device)).argmax(dim=1) == y.to(device)).float().sum().cpu().item()
                net.train()  # switch back to training mode
            else:  # custom model (not used after Section 3.13; GPU not considered)
                if('is_training' in net.__code__.co_varnames):  # if the model takes an is_training argument
                    # set is_training to False
acc_sum += (net(X, is_training=False).argmax(dim=1) == y).float().sum().item()
else:
acc_sum += (net(X).argmax(dim=1) == y).float().sum().item()
n += y.shape[0]
return acc_sum / n
train_acc_list = []
test_acc_list = []
def train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs):
net = net.to(device)
print("training on ", device)
loss = torch.nn.CrossEntropyLoss()
for epoch in range(num_epochs):
train_l_sum, train_acc_sum, n, batch_count, start = 0.0, 0.0, 0, 0, time.time()
for X, y in train_iter:
X = X.to(device)
y = y.to(device)
y_hat = net(X)
l = loss(y_hat, y)
optimizer.zero_grad()
l.backward()
optimizer.step()
train_l_sum += l.cpu().item()
train_acc_sum += (y_hat.argmax(dim=1) == y).sum().cpu().item()
n += y.shape[0]
batch_count += 1
test_acc = evaluate_accuracy(test_iter, net)
train_acc_list.append(train_acc_sum / n)
test_acc_list.append(test_acc)
print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f, time %.1f sec'
% (epoch + 1, train_l_sum / batch_count, train_acc_sum / n, test_acc, time.time() - start))
lr, num_epochs = 0.005, 100
# optimizer = torch.optim.Adam(net.parameters(), lr=lr)
optimizer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9,weight_decay=3e-4)
train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)
net.fc = nn.Sequential(
nn.ReLU(),
nn.Dropout(0.65),
nn.Linear(512, len(label))
)
lr, num_epochs = 0.05, 100
# optimizer = torch.optim.Adam(net.parameters(), lr=lr)
optimizer = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9,weight_decay=1e-4)
train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)
# Added: plot the training and test accuracy curves
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
x = np.arange(1, len(train_acc_list)+1, 1)
y1 = np.array(train_acc_list)
y2 = np.array(test_acc_list)
plt.plot(x, y1, label="train")
plt.plot(x, y2, linestyle = "--", label="test")
plt.xlabel("x")
plt.ylabel("y")
plt.title('train & test')
plt.legend()
plt.show()
net.to(device)
###Output
_____no_output_____
###Markdown
Save the model
###Code
torch.save(net, './resnet18.pkl')
###Output
_____no_output_____
###Markdown
Generate the recognition result file and test it
###Code
# Load the trained model
model = torch.load('./resnet18.pkl')
# Load the name-to-index mapping saved during this training run
label = {}
with open('label.pkl','rb') as file:
label = pickle.loads(file.read())
# print(label)
# Label mapping for the test set answers
import pickle
label_answer = {}
with open('label_answer.pkl','rb') as file:
label_answer = pickle.loads(file.read())
label_answer = {value:key for key, value in label_answer.items()}
# Load the test data (under the test directory)
from PIL import Image
import numpy as np
# Keep preprocessing consistent with the 3-channel pipeline the model was trained on:
# Grayscale(num_output_channels=1) would not match ResNet18's 3-channel first conv layer.
transform = torchvision.transforms.Compose([
    torchvision.transforms.Resize([330, 330]),
    torchvision.transforms.CenterCrop([224, 224]),
    torchvision.transforms.ToTensor()
])
# Generate the test result file
path = os.listdir('test')
r_d = {}
model.eval()  # disable dropout before inference
for f in path:
img = Image.open('test/' + f)
test_imgs = transform(img).unsqueeze(0)
test_imgs = test_imgs.to(device)
y = model(test_imgs)
pred = torch.argmax(y, dim = 1)
r = label_answer[label[int(pred)]]
r_d[int(f.strip('.jpg'))] = r
# Write the result file
r_d = sorted(r_d.items(), key=lambda a:a[0])
r_d = dict(r_d)
ret = open("result.csv","w")
for key, value in r_d.items():
print("%d,%s"%(key, value), file=ret)
ret.close()
# Based on the result-generation code above, write your own main.py that produces the result file result.csv
# Known pitfall: main.py must also include the model class definition
# Test that main.py generates result.csv
!python main.py
# Verify the generated file yourself
###Output
1.0.0
0.2.1
cpu
3578.jpg
5109.jpg
1409.jpg
2896.jpg
3550.jpg
2869.jpg
2855.jpg
1384.jpg
1810.jpg
3791.jpg
3168.jpg
3154.jpg
3626.jpg
1970.jpg
2289.jpg
5294.jpg
4834.jpg
1794.jpg
1582.jpg
4388.jpg
5097.jpg
2920.jpg
1540.jpg
4410.jpg
1781.jpg
1971.jpg
229.jpg
375.jpg
407.jpg
3753.jpg
3035.jpg
1178.jpg
1420.jpg
2129.jpg
3551.jpg
2103.jpg
2665.jpg
1422.jpg
2467.jpg
1608.jpg
2507.jpg
4604.jpg
1026.jpg
5240.jpg
203.jpg
4823.jpg
2739.jpg
3427.jpg
5095.jpg
1595.jpg
3382.jpg
1580.jpg
3803.jpg
2274.jpg
2506.jpg
1153.jpg
2466.jpg
2314.jpg
2472.jpg
3591.jpg
1351.jpg
5137.jpg
3546.jpg
4567.jpg
3224.jpg
170.jpg
3754.jpg
4749.jpg
5319.jpg
3797.jpg
2270.jpg
5286.jpg
3807.jpg
1976.jpg
548.jpg
5053.jpg
2066.jpg
4833.jpg
2339.jpg
4748.jpg
4984.jpg
1624.jpg
2846.jpg
4576.jpg
3543.jpg
2887.jpg
826.jpg
2663.jpg
3569.jpg
5332.jpg
4038.jpg
1140.jpg
2475.jpg
3780.jpg
1183.jpg
588.jpg
577.jpg
2298.jpg
5291.jpg
3804.jpg
2071.jpg
946.jpg
1587.jpg
1592.jpg
4367.jpg
953.jpg
1551.jpg
1545.jpg
3434.jpg
5284.jpg
210.jpg
1035.jpg
3636.jpg
416.jpg
364.jpg
2306.jpg
2448.jpg
600.jpg
2662.jpg
5131.jpg
2611.jpg
3527.jpg
883.jpg
3296.jpg
3719.jpg
2349.jpg
5383.jpg
4664.jpg
4843.jpg
2598.jpg
2942.jpg
712.jpg
1286.jpg
5037.jpg
4473.jpg
3308.jpg
1084.jpg
510.jpg
5235.jpg
5209.jpg
289.jpg
4705.jpg
2823.jpg
4249.jpg
2610.jpg
2176.jpg
17.jpg
4263.jpg
2612.jpg
2835.jpg
102.jpg
4707.jpg
4075.jpg
3040.jpg
328.jpg
2389.jpg
4673.jpg
3685.jpg
506.jpg
4840.jpg
512.jpg
2982.jpg
1521.jpg
2969.jpg
1284.jpg
1087.jpg
507.jpg
4128.jpg
4100.jpg
5381.jpg
3914.jpg
4048.jpg
3727.jpg
3294.jpg
842.jpg
5168.jpg
2161.jpg
2159.jpg
1336.jpg
846.jpg
891.jpg
885.jpg
5352.jpg
2401.jpg
5408.jpg
477.jpg
3910.jpg
3904.jpg
5226.jpg
4110.jpg
265.jpg
1929.jpg
4851.jpg
4879.jpg
2993.jpg
3483.jpg
5019.jpg
3440.jpg
4111.jpg
2548.jpg
5233.jpg
1690.jpg
4930.jpg
5409.jpg
4703.jpg
5151.jpg
3520.jpg
5145.jpg
3536.jpg
2600.jpg
845.jpg
851.jpg
104.jpg
2833.jpg
1645.jpg
5379.jpg
4067.jpg
5392.jpg
4098.jpg
312.jpg
3091.jpg
1876.jpg
4885.jpg
1057.jpg
5231.jpg
272.jpg
4852.jpg
1902.jpg
5033.jpg
931.jpg
4477.jpg
1269.jpg
3494.jpg
2952.jpg
2985.jpg
3872.jpg
1730.jpg
3090.jpg
2417.jpg
2615.jpg
5152.jpg
1334.jpg
4255.jpg
2624.jpg
4296.jpg
2803.jpg
4057.jpg
4731.jpg
4725.jpg
2368.jpg
2383.jpg
3894.jpg
4645.jpg
1715.jpg
3843.jpg
530.jpg
2036.jpg
2778.jpg
4484.jpg
3301.jpg
1502.jpg
928.jpg
2745.jpg
2751.jpg
3665.jpg
3936.jpg
1106.jpg
3705.jpg
3063.jpg
4724.jpg
5162.jpg
848.jpg
684.jpg
2625.jpg
4254.jpg
20.jpg
4518.jpg
5160.jpg
5174.jpg
4726.jpg
1689.jpg
321.jpg
1070.jpg
3459.jpg
3303.jpg
1500.jpg
1267.jpg
3316.jpg
5029.jpg
268.jpg
2578.jpg
283.jpg
4647.jpg
3882.jpg
3706.jpg
4055.jpg
2356.jpg
2183.jpg
5161.jpg
2632.jpg
1465.jpg
3266.jpg
3528.jpg
4521.jpg
2144.jpg
27.jpg
2805.jpg
5373.jpg
2434.jpg
3104.jpg
1075.jpg
3676.jpg
4119.jpg
2554.jpg
1908.jpg
250.jpg
735.jpg
4454.jpg
3313.jpg
5010.jpg
1510.jpg
3307.jpg
3844.jpg
537.jpg
4118.jpg
1706.jpg
1712.jpg
1869.jpg
3059.jpg
5372.jpg
494.jpg
2838.jpg
2810.jpg
5158.jpg
682.jpg
3503.jpg
1314.jpg
3259.jpg
125.jpg
5416.jpg
5358.jpg
3729.jpg
327.jpg
2386.jpg
3649.jpg
4132.jpg
3885.jpg
509.jpg
2033.jpg
2967.jpg
5013.jpg
4319.jpg
4872.jpg
252.jpg
2542.jpg
4669.jpg
3927.jpg
454.jpg
440.jpg
3728.jpg
1881.jpg
4721.jpg
5198.jpg
2813.jpg
816.jpg
3217.jpg
95.jpg
157.jpg
5316.jpg
3767.jpg
2519.jpg
5276.jpg
3834.jpg
4829.jpg
2733.jpg
1560.jpg
2901.jpg
2915.jpg
787.jpg
3404.jpg
2040.jpg
3438.jpg
4343.jpg
2732.jpg
546.jpg
4828.jpg
432.jpg
1818.jpg
3941.jpg
3799.jpg
3772.jpg
1617.jpg
4021.jpg
3982.jpg
4592.jpg
3216.jpg
195.jpg
3202.jpg
1429.jpg
4221.jpg
2650.jpg
3214.jpg
168.jpg
4751.jpg
5315.jpg
3002.jpg
4786.jpg
1832.jpg
4143.jpg
1985.jpg
587.jpg
1952.jpg
222.jpg
236.jpg
1577.jpg
2080.jpg
784.jpg
2043.jpg
4195.jpg
2484.jpg
4787.jpg
3956.jpg
5314.jpg
394.jpg
4585.jpg
196.jpg
810.jpg
2899.jpg
4230.jpg
2682.jpg
1162.jpg
5310.jpg
4998.jpg
4754.jpg
3985.jpg
1189.jpg
582.jpg
3173.jpg
2251.jpg
4620.jpg
3629.jpg
1957.jpg
4185.jpg
1572.jpg
780.jpg
1200.jpg
2721.jpg
2912.jpg
1599.jpg
971.jpg
3358.jpg
2708.jpg
3402.jpg
540.jpg
226.jpg
4621.jpg
2536.jpg
2244.jpg
1765.jpg
4609.jpg
4755.jpg
3748.jpg
1639.jpg
3238.jpg
4557.jpg
3576.jpg
4569.jpg
3574.jpg
191.jpg
84.jpg
4582.jpg
4019.jpg
4970.jpg
1834.jpg
4780.jpg
1820.jpg
1015.jpg
556.jpg
224.jpg
4810.jpg
1954.jpg
4192.jpg
2093.jpg
768.jpg
4346.jpg
5058.jpg
3830.jpg
4636.jpg
1982.jpg
5266.jpg
345.jpg
4971.jpg
2333.jpg
1148.jpg
3039.jpg
147.jpg
621.jpg
2864.jpg
46.jpg
5104.jpg
finish
###Markdown
###Code
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import transforms
import time
from tqdm import tqdm
from tqdm import tqdm_notebook
%matplotlib inline
class NormalBlock(nn.Module):
vol_expansion = 1
def __init__(self, in_channels, out_channels, stride=1):
super(NormalBlock, self).__init__()
self.conv_layer1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False)
self.batch_norm_1 = nn.BatchNorm2d(out_channels)
self.conv_layer2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=False)
self.batch_norm_2 = nn.BatchNorm2d(out_channels)
self.skip_layer = nn.Sequential()
if stride != 1 or in_channels != self.vol_expansion*out_channels:
self.skip_layer = nn.Sequential(
nn.Conv2d(in_channels, self.vol_expansion * out_channels, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.vol_expansion * out_channels)
)
def forward(self, x):
out = F.relu(self.batch_norm_1(self.conv_layer1(x)))
out = self.batch_norm_2(self.conv_layer2(out))
out += self.skip_layer(x)
out = F.relu(out)
return out
class BottleneckBlock(nn.Module):
vol_expansion = 4
def __init__(self, in_channels, out_channels, stride=1):
super(BottleneckBlock, self).__init__()
self.conv_layer1 = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
self.batch_norm_1 = nn.BatchNorm2d(out_channels)
self.conv_layer2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False)
self.batch_norm_2 = nn.BatchNorm2d(out_channels)
self.conv_layer3 = nn.Conv2d(out_channels, self.vol_expansion * out_channels, kernel_size=1, bias=False)
self.batch_norm_3 = nn.BatchNorm2d(self.vol_expansion * out_channels)
self.skip_layers = nn.Sequential()
if stride != 1 or in_channels != self.vol_expansion * out_channels:
self.skip_layers = nn.Sequential(
nn.Conv2d(in_channels, self.vol_expansion * out_channels, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.vol_expansion * out_channels)
)
def forward(self, x):
out = F.relu(self.batch_norm_1(self.conv_layer1(x)))
out = F.relu(self.batch_norm_2(self.conv_layer2(out)))
out = self.batch_norm_3(self.conv_layer3(out))
out += self.skip_layers(x)
out = F.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, num_blocks, num_classes=10):
super(ResNet, self).__init__()
self.in_channels = 64
self.conv_layer1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
self.linear = nn.Linear(512*block.vol_expansion, num_classes)
def _make_layer(self, block, out_channels, num_blocks, stride):
strides = [stride] + [1]*(num_blocks-1)
layers = []
for stride in strides:
layers.append(block(self.in_channels, out_channels, stride))
self.in_channels = out_channels * block.vol_expansion
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv_layer1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = F.avg_pool2d(out, 4)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
def ResNet18():
return ResNet(NormalBlock, [2,2,2,2])
def ResNet34():
return ResNet(NormalBlock, [3,4,6,3])
def ResNet50():
return ResNet(BottleneckBlock, [3,4,6,3])
def ResNet101():
return ResNet(BottleneckBlock, [3,4,23,3])
def ResNet152():
return ResNet(BottleneckBlock, [3,8,36,3])
# training data transformation
transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4821, 0.4465), (0.2470,0.2435,0.2616))])
# training data loader
train_set = torchvision.datasets.CIFAR10(root='./data',
train=True,
download=True,
transform=transform_train)
train_loader = torch.utils.data.DataLoader(dataset=train_set,
batch_size=32,
shuffle=True,
num_workers=2)
# test data transformation
transform_test = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4821, 0.4465), (0.2470, 0.2435,0.2616))])
# test data loader
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True,
transform=transform_test)
test_loader = torch.utils.data.DataLoader(dataset=testset,
batch_size=32,
shuffle=False,
num_workers=2)
# quick sanity check: print the shape of a single batch
for i, (inputs, labels) in enumerate(train_loader):
    print(inputs.shape)
    break
def train_model(model, loss_function, optimizer, data_loader):
# set model to training mode
model.train()
current_loss = 0.0
current_acc = 0
# iterate over the training data
for i, (inputs, labels) in enumerate(data_loader):
# send the input/labels to the GPU
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
with torch.set_grad_enabled(True):
# forward
outputs = model(inputs)
_, predictions = torch.max(outputs, 1)
loss = loss_function(outputs, labels)
# backward
loss.backward()
optimizer.step()
# statistics
current_loss += loss.item() * inputs.size(0)
current_acc += torch.sum(predictions == labels.data)
total_loss = current_loss / len(data_loader.dataset)
total_acc = current_acc.double() / len(data_loader.dataset)
print('Train Loss: {:.4f}; Accuracy: {:.4f}'.format(total_loss,
total_acc))
return total_loss, total_acc
def test_model(model, loss_function, data_loader):
# set model in evaluation mode
model.eval()
current_loss = 0.0
current_acc = 0
# iterate over the validation data
for i, (inputs, labels) in enumerate(data_loader):
# send the input/labels to the GPU
inputs = inputs.to(device)
labels = labels.to(device)
# forward
with torch.set_grad_enabled(False):
outputs = model(inputs)
_, predictions = torch.max(outputs, 1)
loss = loss_function(outputs, labels)
# statistics
current_loss += loss.item() * inputs.size(0)
current_acc += torch.sum(predictions == labels.data)
total_loss = current_loss / len(data_loader.dataset)
total_acc = current_acc.double() / len(data_loader.dataset)
print('Test Loss: {:.4f}; Accuracy: {:.4f}'.format(total_loss,total_acc))
return total_loss, total_acc
def plot_accuracy(test_accuracy: list,train_accuracy,model_name,ep):
"""Plot accuracy"""
plt.figure()
x = range(1,ep+1)
plt.plot(x,test_accuracy,color='b',label='Test')
plt.plot(x,train_accuracy,color='r',label='Train')
plt.title(model_name)
# plt.xticks(
# [i for i in range(0, len(accuracy))],
# [i + 1 for i in range(0, len(accuracy))])
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend()
plt.show()
# plt.savefig('{}.png'.format(model_name))
if __name__ == '__main__':
modelname_list = ['ResNet18','ResNet34','ResNet50','ResNet101','ResNet152']
models_list = [ResNet18(),ResNet34(),ResNet50(),ResNet101(),ResNet152()]
for i in range(len(models_list[:1])):
start_time = time.time()
name = modelname_list[i]
print("Model:",name)
model = models_list[i]
        # select GPU 0 if available, otherwise fall back to CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# transfer the model to the GPU
model = model.to(device)
# loss function
loss_function = nn.CrossEntropyLoss()
# We'll optimize all parameters
optimizer = optim.Adam(model.parameters())
EPOCHS = 50
# with tqdm(total=EPOCHS) as pbar:
test_acc,train_acc = [],[] # collect accuracy for plotting
for epoch in range(EPOCHS):
print('Epoch {}/{}'.format(epoch + 1, EPOCHS))
train_loss,train_accuracy = train_model(model, loss_function, optimizer, train_loader)
test_loss, test_accuracy = test_model(model, loss_function, test_loader)
train_acc.append(train_accuracy)
test_acc.append(test_accuracy)
# pbar.update(1)
# pbar.close()
# torch.save(model, PATH)
endtime = time.time() - start_time
print("Endtime %s seconds",endtime)
plot_accuracy(test_acc,train_acc,name,EPOCHS)
###Output
_____no_output_____
###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
import matplotlib.pyplot as plt
%matplotlib inline
import cv2
from PIL import Image
def read_img(file_path):
img_arr = cv2.imread(file_path)
return cv2.cvtColor(img_arr, cv2.COLOR_BGR2RGB)
dataset_path = '/content/drive/MyDrive/img_temp'
import os
import glob
import torch, torchvision
from torchvision import transforms
from torch.utils.data import Dataset, DataLoader
from PIL import Image, ImageOps
class FaceDataset(Dataset):
def __init__(self, dir_path, mode, transform=None):
"""
dir_path : Path to the directory.
mode: train / test
transform (callable, optional): Optional transform to be applied on a sample.
"""
self.all_data = sorted(glob.glob(os.path.join(dir_path, mode, '*', '*')))
self.transform = transform
def __len__(self):
return len(self.all_data)
def __getitem__(self, idx):
        if torch.is_tensor(idx):  # the index may arrive as a tensor, so convert it to a plain list
idx = idx.tolist()
data_path = self.all_data[idx]
#img = Image.open(data_path)
#fixed_img = ImageOps.exif_transpose(img)
        # to use albumentations, load images with the cv2 library
img = cv2.imread(data_path)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        # apply the transform
if self.transform:
augmented = self.transform(image=img)
img = augmented['image']
        # assign the label based on the image file name
# ['anger', 'anxiety', 'delight', 'hurt', 'neutrality', 'panic', 'sad']
if (os.path.basename(data_path).startswith("train_anger") == True) or (os.path.basename(data_path).startswith("test_anger") == True):
label = 0
elif (os.path.basename(data_path).startswith("train_anxiety") == True) or (os.path.basename(data_path).startswith("test_anxiety") == True):
label = 1
elif (os.path.basename(data_path).startswith("train_delight") == True) or (os.path.basename(data_path).startswith("test_delight") == True):
label = 2
elif (os.path.basename(data_path).startswith("train_hurt") == True) or (os.path.basename(data_path).startswith("test_hurt") == True):
label = 3
elif (os.path.basename(data_path).startswith("train_neutrality") == True) or (os.path.basename(data_path).startswith("test_neutrality") == True):
label = 4
elif (os.path.basename(data_path).startswith("train_panic") == True) or (os.path.basename(data_path).startswith("test_panic") == True):
label = 5
else:
label = 6
return img, label
import albumentations
import albumentations.pytorch
albumentations_trans = albumentations.Compose([
albumentations.Resize(32,32),
albumentations.pytorch.transforms.ToTensor()
])
train_set = FaceDataset(dataset_path, 'train', transform=albumentations_trans)
test_set = FaceDataset(dataset_path, 'test', transform=albumentations_trans)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=100, shuffle=False)
print(len(train_set))
print(test_set.__len__())
train_set.__getitem__(3)
train_set.__getitem__(3)[0].shape
# Images returned by the dataset class are tensors, so for visualization we permute them back to [height, width, channels]
def tensor_img(img):
img = img.permute(1,2,0)
plt.imshow(img)
tensor_img(train_set.__getitem__(3)[0])
'''ResNet in PyTorch.
For Pre-activation ResNet, see 'preact_resnet.py'.
Reference:
[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Deep Residual Learning for Image Recognition. arXiv:1512.03385
'''
import torch
import torch.nn as nn
import torch.nn.functional as F
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1):
super(BasicBlock, self).__init__()
self.conv1 = nn.Conv2d(
in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.bn2(self.conv2(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, in_planes, planes, stride=1):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3,
stride=stride, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, self.expansion *
planes, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(self.expansion*planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes,
kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = F.relu(self.bn2(self.conv2(out)))
out = self.bn3(self.conv3(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, num_blocks, num_classes=7):
super(ResNet, self).__init__()
self.in_planes = 64
self.conv1 = nn.Conv2d(3, 64, kernel_size=3,
stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
self.linear = nn.Linear(512*block.expansion, num_classes)
def _make_layer(self, block, planes, num_blocks, stride):
strides = [stride] + [1]*(num_blocks-1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride))
self.in_planes = planes * block.expansion
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = F.avg_pool2d(out, 4)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
def ResNet18():
return ResNet(BasicBlock, [2, 2, 2, 2])
def ResNet34():
return ResNet(BasicBlock, [3, 4, 6, 3])
def ResNet50():
return ResNet(Bottleneck, [3, 4, 6, 3])
def ResNet101():
return ResNet(Bottleneck, [3, 4, 23, 3])
def ResNet152():
return ResNet(Bottleneck, [3, 8, 36, 3])
def test():
net = ResNet18()
y = net(torch.randn(1, 3, 32, 32))
print(y.size())
# test()
# progress bar code
import os
import sys
import time
import math
import torch.nn as nn
import torch.nn.init as init
import shutil
def get_mean_and_std(dataset):
'''Compute the mean and std value of dataset.'''
dataloader = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=True, num_workers=2)
mean = torch.zeros(3)
std = torch.zeros(3)
print('==> Computing mean and std..')
for inputs, targets in dataloader:
for i in range(3):
mean[i] += inputs[:,i,:,:].mean()
std[i] += inputs[:,i,:,:].std()
mean.div_(len(dataset))
std.div_(len(dataset))
return mean, std
def init_params(net):
'''Init layer parameters.'''
for m in net.modules():
if isinstance(m, nn.Conv2d):
init.kaiming_normal(m.weight, mode='fan_out')
if m.bias:
init.constant(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
init.constant(m.weight, 1)
init.constant(m.bias, 0)
elif isinstance(m, nn.Linear):
init.normal(m.weight, std=1e-3)
if m.bias:
init.constant(m.bias, 0)
term_width = shutil.get_terminal_size().columns  # terminal width in columns (the original unpacking grabbed the row count)
TOTAL_BAR_LENGTH = 65.
last_time = time.time()
begin_time = last_time
def progress_bar(current, total, msg=None):
global last_time, begin_time
if current == 0:
begin_time = time.time() # Reset for new bar.
cur_len = int(TOTAL_BAR_LENGTH*current/total)
rest_len = int(TOTAL_BAR_LENGTH - cur_len) - 1
sys.stdout.write(' [')
for i in range(cur_len):
sys.stdout.write('=')
sys.stdout.write('>')
for i in range(rest_len):
sys.stdout.write('.')
sys.stdout.write(']')
cur_time = time.time()
step_time = cur_time - last_time
last_time = cur_time
tot_time = cur_time - begin_time
L = []
L.append(' Step: %s' % format_time(step_time))
L.append(' | Tot: %s' % format_time(tot_time))
if msg:
L.append(' | ' + msg)
msg = ''.join(L)
sys.stdout.write(msg)
for i in range(term_width-int(TOTAL_BAR_LENGTH)-len(msg)-3):
sys.stdout.write(' ')
# Go back to the center of the bar.
for i in range(term_width-int(TOTAL_BAR_LENGTH/2)+2):
sys.stdout.write('\b')
sys.stdout.write(' %d/%d ' % (current+1, total))
if current < total-1:
sys.stdout.write('\r')
else:
sys.stdout.write('\n')
sys.stdout.flush()
def format_time(seconds):
days = int(seconds / 3600/24)
seconds = seconds - days*3600*24
hours = int(seconds / 3600)
seconds = seconds - hours*3600
minutes = int(seconds / 60)
seconds = seconds - minutes*60
secondsf = int(seconds)
seconds = seconds - secondsf
millis = int(seconds*1000)
f = ''
i = 1
if days > 0:
f += str(days) + 'D'
i += 1
if hours > 0 and i <= 2:
f += str(hours) + 'h'
i += 1
if minutes > 0 and i <= 2:
f += str(minutes) + 'm'
i += 1
if secondsf > 0 and i <= 2:
f += str(secondsf) + 's'
i += 1
if millis > 0 and i <= 2:
f += str(millis) + 'ms'
i += 1
if f == '':
f = '0ms'
return f
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
import os
import argparse
"""
parser = argparse.ArgumentParser(description='PyTorch CIFAR10 Training')
parser.add_argument('--lr', default=0.1, type=float, help='learning rate')
parser.add_argument('--resume', '-r', action='store_true',
help='resume from checkpoint')
args = parser.parse_args()
"""
device = 'cuda' if torch.cuda.is_available() else 'cpu'
best_acc = 0 # best test accuracy
start_epoch = 0 # start from epoch 0 or last checkpoint epoch
# Data
# print('==> Preparing data..')
# transform_train = transforms.Compose([
# transforms.RandomCrop(32, padding=4),
# transforms.RandomHorizontalFlip(),
# transforms.ToTensor(),
# transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
# ])
# transform_test = transforms.Compose([
# transforms.ToTensor(),
# transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
# ])
# trainset = torchvision.datasets.CIFAR10(
# root='./data', train=True, download=True, transform=transform_train)
# trainloader = torch.utils.data.DataLoader(
# trainset, batch_size=128, shuffle=True, num_workers=2)
# testset = torchvision.datasets.CIFAR10(
# root='./data', train=False, download=True, transform=transform_test)
# testloader = torch.utils.data.DataLoader(
# testset, batch_size=100, shuffle=False, num_workers=2)
# classes = ('plane', 'car', 'bird', 'cat', 'deer',
# 'dog', 'frog', 'horse', 'ship', 'truck')
# Model
print('==> Building model..')
# net = VGG('VGG19')
# net = ResNet18()
# net = PreActResNet18()
# net = GoogLeNet()
# net = DenseNet121()
# net = ResNeXt29_2x64d()
# net = MobileNet()
# net = MobileNetV2()
# net = DPN92()
# net = ShuffleNetG2()
# net = SENet18()
# net = ShuffleNetV2(1)
# net = EfficientNetB0()
# net = RegNetX_200MF()
net = ResNet18()
net = net.to(device)
if device == 'cuda':
net = torch.nn.DataParallel(net)
cudnn.benchmark = True
"""
if args.resume:
# Load checkpoint.
print('==> Resuming from checkpoint..')
assert os.path.isdir('checkpoint'), 'Error: no checkpoint directory found!'
checkpoint = torch.load('./checkpoint/ckpt.pth')
net.load_state_dict(checkpoint['net'])
best_acc = checkpoint['acc']
start_epoch = checkpoint['epoch']
"""
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.1,
momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
# Training
def train(epoch):
print('\nEpoch: %d' % epoch)
net.train()
train_loss = 0
correct = 0
total = 0
for batch_idx, (inputs, targets) in enumerate(train_loader):
inputs, targets = inputs.to(device), targets.to(device)
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
train_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
progress_bar(batch_idx, len(train_loader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
% (train_loss/(batch_idx+1), 100.*correct/total, correct, total))
def test(epoch):
global best_acc
net.eval()
test_loss = 0
correct = 0
total = 0
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(test_loader):
inputs, targets = inputs.to(device), targets.to(device)
outputs = net(inputs)
loss = criterion(outputs, targets)
test_loss += loss.item()
_, predicted = outputs.max(1)
total += targets.size(0)
correct += predicted.eq(targets).sum().item()
print(targets, predicted)
progress_bar(batch_idx, len(test_loader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
% (test_loss/(batch_idx+1), 100.*correct/total, correct, total))
# Save checkpoint.
acc = 100.*correct/total
if acc > best_acc:
print('Saving..')
state = {
'net': net.state_dict(),
'acc': acc,
'epoch': epoch,
}
if not os.path.isdir('checkpoint'):
os.mkdir('checkpoint')
torch.save(state, './checkpoint/ckpt.pth')
best_acc = acc
for epoch in range(start_epoch, start_epoch+100):
train(epoch)
test(epoch)
scheduler.step()
###Output
Streaming output truncated to the last 5000 lines.
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 1, 5, 1, 6, 1, 5, 6, 3, 2, 4, 1, 1, 1, 4, 4, 1, 1, 5, 4, 2, 2, 1,
5, 5, 4, 3, 1, 4, 1, 1, 4, 1, 3, 1, 1, 4, 4, 4, 1, 1, 1, 1, 5, 4, 1, 1,
1, 1, 5, 6, 3, 1, 1, 1, 1, 4, 1, 5, 1, 1, 2, 1, 6, 6, 6, 1, 6, 0, 1, 1,
3, 0, 1, 1, 6, 4, 1, 5, 1, 1, 1, 2, 3, 1, 1, 4, 4, 1, 0, 1, 1, 6, 5, 1,
5, 1, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 4, 6, 4, 1, 4, 1, 1, 1, 4, 6, 4, 4, 4, 6, 3, 4, 5, 1, 6, 6, 1, 4, 6,
1, 4, 1, 3, 6, 4, 4, 6, 4, 4, 4, 4, 4, 6, 4, 2, 1, 3, 6, 1, 4, 1, 3, 1,
4, 4, 1, 1, 6, 4, 4, 1, 1, 4, 0, 1, 1, 1, 1, 4, 4, 3, 4, 4, 6, 4, 4, 3,
3, 1, 4, 4, 4, 4, 6, 5, 3, 4, 1, 4, 4, 1, 1, 1, 6, 0, 1, 4, 1, 4, 1, 4,
5, 4, 4, 5], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 1, 1, 3, 1, 1, 5, 1, 1, 5, 5, 5, 2, 1, 3, 0, 0, 0, 5, 5, 6, 1, 5,
1, 5, 5, 1, 5, 0, 1, 6, 6, 2, 0, 1, 1, 5, 5, 5, 5, 3, 4, 5, 3, 5, 5, 1,
5, 5, 1, 5, 1, 1, 5, 5, 1, 3, 4, 1, 3, 5, 2, 3, 5, 1, 1, 4, 1, 1, 5, 5,
5, 1, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 1, 1, 1, 3, 1, 5, 5, 5, 5, 5, 4, 5,
1, 1, 5, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([1, 4, 1, 1, 2, 1, 1, 2, 1, 1, 1, 6, 3, 1, 2, 0, 3, 0, 1, 6, 4, 1, 1, 6,
2, 6, 2, 4, 3, 0, 1, 2, 0, 2, 1, 1, 5, 5, 0, 4, 6, 1, 6, 0, 6, 1, 1, 3,
3, 0, 2, 1, 1, 6, 0, 0, 1, 6, 4, 2, 3, 6, 1, 0, 0, 3, 1, 4, 1, 1, 5, 6,
1, 1, 1, 6, 2, 6, 6, 3, 6, 4, 1, 2, 1, 0, 0, 5, 0, 1, 3, 3, 2, 6, 2, 4,
1, 3, 1, 6], device='cuda:0')
[=======================================================>.........] Step: 1s681ms | Tot: 10s154ms | Loss: 2.740 | Acc: 36.143% (253/700) 7/7
Epoch: 26
[==============================================================>..] Step: 771ms | Tot: 57s985ms | Loss: 0.453 | Acc: 83.857% (2935/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([1, 0, 0, 3, 1, 0, 0, 0, 4, 2, 2, 3, 0, 0, 0, 4, 2, 0, 2, 1, 3, 5, 0, 3,
1, 1, 4, 2, 0, 2, 1, 0, 2, 0, 4, 2, 4, 2, 0, 4, 2, 2, 4, 3, 3, 1, 0, 5,
1, 1, 2, 0, 2, 0, 1, 4, 4, 2, 0, 4, 0, 3, 2, 1, 2, 3, 3, 2, 0, 0, 3, 0,
[=======================================================>.........] Step: 1s735ms | Tot: 10s223ms | Loss: 3.252 | Acc: 33.286% (233/700) 7/7
Epoch: 27
[==============================================================>..] Step: 801ms | Tot: 59s52ms | Loss: 0.341 | Acc: 87.457% (3061/3500) 28/28
[=======================================================>.........] Step: 1s730ms | Tot: 10s406ms | Loss: 2.737 | Acc: 39.143% (274/700) 7/7
Epoch: 28
[==============================================================>..] Step: 814ms | Tot: 58s291ms | Loss: 0.306 | Acc: 88.714% (3105/3500) 28/28
[=======================================================>.........] Step: 1s697ms | Tot: 10s178ms | Loss: 3.708 | Acc: 32.429% (227/700) 7/7
Epoch: 29
[==============================================================>..] Step: 814ms | Tot: 58s261ms | Loss: 0.308 | Acc: 88.686% (3104/3500) 28/28
[=======================================================>.........] Step: 1s695ms | Tot: 10s394ms | Loss: 3.429 | Acc: 34.714% (243/700) 7/7
Epoch: 30
[==============================================================>..] Step: 785ms | Tot: 58s109ms | Loss: 0.279 | Acc: 89.886% (3146/3500) 28/28
[=======================================================>.........] Step: 1s743ms | Tot: 10s204ms | Loss: 2.731 | Acc: 40.143% (281/700) 7/7
Saving..
Epoch: 31
[==============================================================>..] Step: 775ms | Tot: 58s374ms | Loss: 0.228 | Acc: 91.886% (3216/3500) 28/28
[=======================================================>.........] Step: 1s721ms | Tot: 10s299ms | Loss: 3.601 | Acc: 33.429% (234/700) 7/7
Epoch: 32
[==============================================================>..] Step: 706ms | Tot: 58s275ms | Loss: 0.169 | Acc: 94.371% (3303/3500) 28/28
[=======================================================>.........] Step: 1s689ms | Tot: 10s178ms | Loss: 3.385 | Acc: 39.429% (276/700) 7/7
Epoch: 33
[==============================================================>..] Step: 834ms | Tot: 58s419ms | Loss: 0.174 | Acc: 93.629% (3277/3500) 28/28
[=======================================================>.........] Step: 1s733ms | Tot: 10s277ms | Loss: 4.178 | Acc: 36.571% (256/700) 7/7
Epoch: 34
[==============================================================>..] Step: 787ms | Tot: 58s601ms | Loss: 0.143 | Acc: 95.143% (3330/3500) 28/28
[=======================================================>.........] Step: 1s676ms | Tot: 10s223ms | Loss: 3.624 | Acc: 36.429% (255/700) 7/7
Epoch: 35
[==============================================================>..] Step: 766ms | Tot: 58s159ms | Loss: 0.101 | Acc: 96.657% (3383/3500) 28/28
[=======================================================>.........] Step: 1s734ms | Tot: 10s228ms | Loss: 3.557 | Acc: 38.857% (272/700) 7/7
Epoch: 36
[==============================================================>..] Step: 801ms | Tot: 58s240ms | Loss: 0.084 | Acc: 97.114% (3399/3500) 28/28
[=======================================================>.........] Step: 1s709ms | Tot: 10s245ms | Loss: 3.292 | Acc: 39.143% (274/700) 7/7
Epoch: 37
[==============================================================>..] Step: 768ms | Tot: 58s466ms | Loss: 0.060 | Acc: 98.229% (3438/3500) 28/28
[=======================================================>.........] Step: 1s708ms | Tot: 10s211ms | Loss: 3.879 | Acc: 37.571% (263/700) 7/7
Epoch: 38
[==============================================================>..] Step: 787ms | Tot: 58s614ms | Loss: 0.048 | Acc: 98.743% (3456/3500) 28/28
[=======================================================>.........] Step: 1s693ms | Tot: 10s171ms | Loss: 3.666 | Acc: 36.571% (256/700) 7/7
Epoch: 39
[==============================================================>..] Step: 789ms | Tot: 58s22ms | Loss: 0.037 | Acc: 98.943% (3463/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([5, 1, 0, 3, 0, 0, 0, 0, 4, 0, 0, 3, 0, 0, 0, 3, 3, 0, 1, 1, 3, 1, 5, 1,
5, 1, 4, 6, 0, 0, 0, 6, 0, 0, 6, 0, 5, 6, 0, 0, 2, 0, 3, 3, 6, 1, 0, 1,
5, 4, 1, 0, 0, 1, 1, 4, 0, 2, 0, 0, 0, 3, 0, 1, 1, 0, 5, 0, 0, 0, 3, 0,
2, 0, 0, 5, 1, 1, 0, 5, 0, 6, 1, 1, 5, 4, 2, 3, 3, 0, 0, 1, 0, 0, 5, 0,
0, 5, 0, 0], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 3, 1, 3, 1, 4, 4, 1, 4, 1, 5, 6, 5, 1, 3, 4, 1, 0, 1, 0,
6, 4, 0, 1, 4, 6, 0, 1, 1, 6, 0, 6, 0, 5, 5, 0, 1, 0, 0, 5, 1, 6, 3, 3,
6, 2, 1, 5, 1, 1, 1, 6, 1, 1, 1, 6, 2, 0, 1, 1, 2, 0, 0, 3, 3, 6, 4, 5,
5, 2, 0, 1, 5, 0, 5, 1, 0, 0, 5, 0, 4, 1, 0, 1, 4, 6, 0, 4, 2, 0, 0, 3,
1, 0, 5, 6], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([5, 2, 2, 2, 2, 6, 2, 2, 2, 6, 2, 2, 0, 2, 0, 2, 2, 3, 2, 2, 6, 1, 2, 5,
1, 2, 2, 2, 2, 2, 6, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2,
2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 0,
2, 2, 2, 1, 2, 2, 1, 2, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 6, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 0, 6, 5, 1, 3, 3, 5, 6, 5, 0, 6, 1, 0, 5, 4, 0, 1, 1, 3, 1, 0, 0, 5,
5, 5, 4, 1, 1, 0, 1, 0, 4, 1, 1, 5, 1, 5, 5, 4, 0, 5, 1, 0, 6, 5, 1, 1,
5, 1, 5, 6, 3, 0, 3, 5, 1, 0, 1, 5, 1, 1, 4, 1, 1, 6, 6, 1, 1, 0, 6, 1,
3, 0, 1, 3, 6, 6, 1, 5, 3, 1, 1, 2, 1, 1, 3, 0, 1, 1, 1, 1, 1, 6, 5, 1,
5, 5, 6, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 1, 4, 3, 6, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 1, 5, 4, 0, 4, 6,
4, 4, 4, 6, 6, 4, 4, 6, 4, 4, 4, 4, 4, 4, 4, 4, 1, 1, 6, 3, 4, 5, 4, 1,
4, 4, 1, 1, 6, 4, 4, 1, 1, 4, 5, 6, 3, 3, 5, 4, 6, 1, 4, 4, 6, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 4, 0, 1, 1, 4, 6, 6,
0, 4, 4, 5], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 5, 1, 5, 1, 3, 5, 5, 5, 2, 2, 1, 5, 0, 1, 5, 1, 6, 5, 5,
5, 5, 5, 1, 0, 5, 1, 6, 5, 0, 0, 1, 1, 5, 5, 5, 5, 1, 1, 5, 3, 5, 5, 0,
5, 3, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 5, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 2, 0, 5, 1, 5, 1, 5, 1, 3, 1, 5, 5, 5, 5, 5, 1, 5,
0, 1, 5, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([6, 4, 0, 6, 2, 1, 3, 2, 1, 1, 1, 6, 3, 6, 2, 0, 1, 4, 3, 6, 4, 6, 5, 6,
6, 6, 0, 4, 0, 5, 6, 2, 3, 6, 6, 1, 5, 5, 5, 1, 6, 1, 6, 1, 6, 1, 6, 5,
1, 0, 1, 1, 1, 6, 3, 6, 1, 6, 5, 2, 3, 1, 3, 6, 0, 6, 6, 1, 3, 1, 5, 6,
1, 4, 2, 1, 2, 6, 5, 3, 6, 5, 0, 0, 1, 0, 3, 6, 5, 1, 6, 1, 4, 6, 0, 1,
2, 6, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s703ms | Tot: 10s226ms | Loss: 3.395 | Acc: 40.429% (283/700) 7/7
Saving..
Epoch: 40
[==============================================================>..] Step: 763ms | Tot: 58s174ms | Loss: 0.016 | Acc: 99.743% (3491/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 3, 4, 0, 0, 3, 0, 1, 3, 1, 0, 0, 3, 2, 0, 2, 1, 3, 0, 2, 1,
5, 1, 4, 0, 0, 0, 2, 6, 0, 0, 1, 0, 6, 0, 0, 2, 2, 0, 3, 3, 6, 0, 0, 1,
5, 0, 1, 0, 0, 0, 1, 4, 0, 2, 0, 2, 0, 3, 0, 1, 1, 0, 5, 2, 0, 0, 0, 0,
2, 0, 0, 5, 1, 1, 0, 5, 0, 0, 2, 1, 5, 0, 2, 3, 3, 0, 0, 1, 0, 0, 5, 0,
0, 5, 0, 0], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([1, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 1, 4, 1, 3, 4, 5, 1, 3, 4, 4, 0, 1, 0,
6, 4, 0, 3, 4, 6, 1, 0, 1, 6, 0, 6, 0, 5, 2, 1, 1, 0, 0, 0, 1, 6, 3, 3,
6, 2, 1, 6, 1, 1, 1, 0, 0, 1, 2, 2, 2, 0, 1, 4, 6, 0, 0, 1, 3, 4, 4, 5,
5, 2, 0, 1, 5, 2, 6, 1, 0, 0, 0, 0, 4, 1, 0, 1, 4, 6, 0, 4, 2, 0, 0, 3,
6, 0, 5, 1], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 6, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 6, 2, 2, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 5, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 1, 5, 6, 5, 0, 6, 4, 3, 0, 4, 0, 1, 1, 3, 4, 0, 2, 5,
0, 5, 4, 1, 1, 2, 1, 0, 4, 1, 1, 1, 1, 4, 5, 4, 0, 5, 1, 3, 1, 4, 1, 1,
5, 5, 5, 6, 3, 1, 4, 4, 1, 0, 1, 5, 1, 1, 2, 1, 1, 3, 6, 1, 6, 0, 1, 1,
3, 0, 1, 4, 3, 6, 1, 5, 0, 1, 1, 2, 1, 2, 0, 4, 4, 1, 1, 1, 1, 6, 5, 0,
5, 5, 6, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 4, 4, 4, 4, 4, 1, 1, 3, 4, 1, 4, 4, 4, 4, 1, 3, 5, 1, 6, 4, 1, 4, 4,
4, 4, 4, 3, 6, 4, 4, 6, 4, 4, 4, 4, 4, 6, 4, 2, 1, 5, 6, 5, 4, 5, 4, 1,
4, 4, 1, 6, 6, 4, 4, 3, 2, 4, 4, 6, 5, 5, 5, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 0, 4, 1, 4, 4, 4, 2, 6, 4, 6, 0, 0, 1, 1, 4, 3, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 0, 1, 1, 5, 5, 3, 5, 3, 5, 2, 2, 5, 0, 1, 1, 5, 1, 6, 5, 5,
0, 5, 5, 1, 0, 5, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 1, 1, 1, 4, 5, 5, 5, 0,
5, 3, 5, 1, 1, 1, 0, 5, 1, 0, 0, 0, 5, 5, 2, 3, 5, 5, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 0, 6, 2, 1, 5, 1, 5, 0, 3, 1, 3, 5, 5, 1, 5, 4, 5,
0, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([6, 4, 0, 1, 2, 1, 3, 2, 1, 1, 1, 0, 3, 6, 2, 0, 1, 4, 3, 6, 4, 6, 3, 6,
2, 6, 0, 2, 3, 0, 6, 2, 3, 6, 6, 1, 5, 2, 2, 4, 6, 4, 6, 0, 6, 1, 0, 3,
1, 0, 4, 1, 1, 6, 3, 6, 0, 6, 1, 2, 5, 4, 3, 6, 0, 0, 1, 6, 1, 1, 5, 6,
1, 1, 2, 0, 0, 6, 5, 2, 6, 1, 0, 0, 1, 3, 3, 5, 0, 3, 0, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s750ms | Tot: 10s231ms | Loss: 3.552 | Acc: 40.429% (283/700) 7/7
Epoch: 41
[==============================================================>..] Step: 824ms | Tot: 58s265ms | Loss: 0.014 | Acc: 99.657% (3488/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([5, 1, 0, 3, 0, 4, 0, 0, 6, 0, 3, 3, 1, 0, 0, 3, 3, 0, 2, 1, 3, 1, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 0, 0, 0, 0, 2, 0, 3, 3, 6, 0, 0, 1,
5, 0, 1, 0, 2, 1, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 1, 2, 0, 0, 3, 0,
2, 0, 0, 1, 1, 1, 0, 1, 0, 0, 2, 1, 5, 4, 2, 3, 3, 0, 0, 1, 0, 0, 5, 0,
0, 5, 0, 0], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 4, 1, 5, 0, 3, 1, 3, 1, 4, 4, 1, 4, 1, 3, 4, 5, 1, 3, 4, 6, 0, 1, 1,
6, 4, 0, 5, 4, 6, 0, 0, 0, 6, 0, 6, 0, 5, 0, 0, 1, 1, 0, 0, 1, 6, 3, 3,
6, 2, 1, 6, 1, 1, 4, 6, 1, 1, 2, 6, 2, 0, 1, 4, 6, 0, 0, 3, 3, 1, 4, 5,
5, 2, 0, 1, 5, 2, 6, 1, 0, 0, 5, 0, 4, 1, 0, 4, 6, 6, 0, 4, 2, 0, 0, 3,
6, 0, 5, 1], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 6, 2, 2, 2, 6, 2, 2, 3, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 6, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 5, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 3, 1, 5, 6, 5, 0, 6, 1, 3, 5, 4, 0, 1, 1, 3, 1, 1, 1, 1,
5, 5, 4, 1, 1, 2, 3, 0, 4, 1, 3, 1, 1, 1, 5, 4, 1, 5, 1, 3, 1, 6, 1, 1,
1, 5, 5, 6, 3, 1, 3, 5, 1, 0, 1, 5, 3, 1, 2, 3, 1, 6, 0, 1, 6, 0, 1, 1,
3, 0, 1, 3, 3, 6, 5, 5, 3, 1, 1, 2, 1, 1, 0, 0, 1, 1, 1, 1, 1, 6, 5, 0,
5, 5, 3, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 6, 4, 3, 6, 1, 1, 3, 4, 6, 4, 4, 4, 6, 3, 3, 5, 1, 1, 3, 1, 4, 4,
4, 4, 4, 3, 6, 4, 4, 6, 4, 4, 4, 4, 4, 6, 4, 2, 1, 3, 6, 3, 4, 1, 4, 1,
4, 4, 1, 6, 6, 4, 4, 3, 1, 4, 4, 6, 3, 3, 5, 4, 4, 5, 4, 4, 6, 4, 4, 5,
3, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 2, 6, 4, 1, 0, 0, 1, 1, 4, 3, 4,
3, 4, 4, 5], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 3, 1, 0, 0, 1, 5, 5, 1, 5, 5,
5, 5, 5, 1, 0, 5, 1, 0, 5, 2, 0, 1, 1, 5, 5, 5, 1, 1, 1, 5, 3, 5, 5, 0,
5, 3, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 3, 5, 2, 3, 5, 5, 1, 1, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 0, 5, 3, 1, 5, 1, 5, 0, 3, 1, 3, 5, 5, 1, 5, 4, 5,
2, 5, 5, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([6, 4, 0, 1, 2, 1, 3, 2, 1, 1, 1, 6, 3, 6, 2, 0, 1, 4, 3, 6, 4, 1, 1, 6,
2, 6, 0, 4, 3, 0, 3, 2, 3, 6, 6, 5, 1, 2, 2, 1, 6, 4, 6, 0, 6, 1, 0, 3,
1, 0, 2, 1, 1, 6, 3, 6, 0, 6, 5, 2, 5, 6, 3, 6, 0, 3, 1, 1, 3, 1, 4, 6,
1, 3, 2, 1, 0, 6, 2, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 0, 3, 0, 2, 2, 4,
2, 6, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s709ms | Tot: 10s238ms | Loss: 3.626 | Acc: 39.571% (277/700) 7/7
Epoch: 42
[==============================================================>..] Step: 831ms | Tot: 58s425ms | Loss: 0.016 | Acc: 99.686% (3489/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([5, 6, 4, 3, 1, 4, 0, 0, 6, 0, 1, 1, 1, 0, 0, 3, 2, 0, 2, 1, 0, 1, 2, 1,
5, 1, 4, 6, 0, 0, 2, 4, 0, 0, 1, 0, 0, 0, 0, 4, 2, 0, 3, 3, 6, 1, 0, 3,
5, 0, 0, 0, 0, 4, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 2, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 0, 1, 1, 0, 6, 0, 0, 1, 0, 5, 4, 0, 3, 3, 0, 0, 1, 0, 0, 5, 0,
0, 1, 0, 0], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([4, 4, 1, 5, 0, 3, 4, 3, 1, 4, 4, 5, 4, 1, 5, 0, 5, 4, 1, 6, 4, 0, 1, 0,
6, 4, 0, 4, 3, 6, 0, 0, 0, 6, 0, 6, 6, 5, 6, 1, 1, 0, 0, 5, 1, 6, 3, 3,
4, 2, 5, 5, 1, 0, 0, 6, 1, 1, 2, 6, 2, 0, 0, 4, 2, 0, 0, 4, 6, 1, 4, 5,
5, 2, 2, 1, 5, 3, 0, 4, 0, 0, 5, 0, 6, 1, 0, 4, 2, 6, 6, 0, 2, 0, 0, 1,
4, 5, 5, 1], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 6, 2, 2, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 5, 2, 2, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 6, 5, 1, 6, 1, 5, 6, 2, 0, 6, 5, 1, 0, 4, 4, 1, 1, 5, 4, 2, 6, 5,
5, 5, 4, 1, 1, 4, 1, 0, 4, 1, 1, 0, 1, 4, 5, 4, 6, 5, 1, 3, 1, 4, 1, 1,
5, 5, 5, 6, 3, 1, 2, 5, 1, 0, 1, 5, 1, 1, 2, 1, 6, 6, 0, 1, 6, 0, 1, 1,
3, 0, 6, 4, 4, 6, 1, 5, 3, 2, 6, 2, 1, 1, 0, 4, 4, 1, 1, 1, 1, 6, 5, 1,
5, 1, 6, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 3, 4, 1, 1, 4, 4, 6, 4, 4, 4, 4, 3, 4, 5, 1, 6, 4, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 6, 4, 4, 4, 4, 4, 4, 4, 2, 5, 5, 4, 5, 4, 5, 4, 1,
4, 4, 1, 6, 6, 4, 4, 4, 1, 6, 4, 6, 3, 0, 4, 4, 6, 3, 4, 4, 6, 4, 4, 5,
3, 1, 4, 4, 1, 4, 6, 6, 4, 4, 4, 4, 4, 2, 6, 4, 1, 4, 0, 4, 5, 4, 2, 6,
1, 4, 4, 5], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 6, 0, 5, 1, 5, 1, 0, 5, 3, 5, 2, 2, 1, 1, 0, 5, 5, 5, 6, 5, 5,
0, 5, 5, 1, 0, 0, 5, 0, 5, 2, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 3, 5, 5, 0,
5, 3, 5, 5, 5, 1, 0, 5, 1, 1, 4, 0, 5, 5, 2, 3, 5, 5, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 2, 1, 3, 1, 5, 1, 5, 0, 1, 1, 3, 5, 5, 1, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([6, 4, 6, 1, 2, 1, 3, 0, 1, 2, 1, 6, 3, 6, 2, 0, 1, 5, 1, 0, 4, 1, 6, 6,
2, 4, 0, 4, 0, 6, 6, 2, 0, 6, 4, 5, 5, 2, 0, 4, 6, 1, 6, 0, 6, 2, 0, 3,
1, 0, 2, 1, 1, 6, 4, 6, 0, 0, 5, 2, 4, 6, 1, 0, 0, 0, 6, 4, 1, 1, 5, 6,
4, 1, 2, 6, 6, 6, 6, 2, 6, 6, 1, 0, 2, 6, 3, 6, 0, 3, 6, 1, 4, 2, 2, 4,
2, 6, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s734ms | Tot: 10s170ms | Loss: 3.565 | Acc: 40.000% (280/700) 7/7
Epoch: 43
[==============================================================>..] Step: 801ms | Tot: 58s378ms | Loss: 0.009 | Acc: 99.829% (3494/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([5, 6, 1, 3, 0, 4, 0, 0, 6, 0, 1, 1, 1, 0, 0, 1, 2, 0, 2, 1, 0, 1, 5, 1,
5, 1, 4, 6, 0, 0, 2, 4, 0, 0, 1, 0, 5, 0, 0, 0, 2, 0, 3, 3, 6, 0, 0, 1,
1, 0, 0, 0, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 5, 2, 0, 0, 0, 0,
2, 0, 0, 5, 1, 1, 0, 1, 0, 0, 2, 1, 5, 5, 2, 3, 0, 0, 0, 5, 0, 0, 5, 0,
0, 5, 0, 0], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 2, 5, 0, 4, 1, 3, 1, 4, 1, 2, 4, 1, 5, 1, 5, 1, 1, 4, 6, 6, 1, 0,
6, 4, 0, 5, 3, 6, 0, 0, 1, 6, 0, 6, 0, 5, 6, 1, 1, 0, 5, 5, 1, 6, 0, 3,
6, 2, 1, 6, 1, 0, 4, 6, 0, 1, 2, 2, 2, 0, 1, 4, 2, 0, 0, 1, 2, 1, 4, 5,
5, 2, 0, 1, 5, 2, 0, 1, 0, 0, 0, 0, 4, 1, 0, 4, 0, 6, 6, 4, 2, 0, 0, 1,
6, 5, 5, 1], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 6, 2, 2, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 6, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 5, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 6, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 0, 3, 5, 5, 6, 1, 5, 6, 5, 0, 6, 4, 1, 5, 4, 0, 1, 1, 5, 4, 2, 2, 0,
5, 5, 4, 1, 1, 3, 1, 0, 4, 1, 1, 5, 1, 0, 5, 4, 1, 5, 1, 3, 1, 5, 1, 1,
5, 1, 5, 6, 3, 0, 2, 5, 1, 0, 1, 5, 1, 5, 2, 1, 6, 6, 0, 5, 6, 0, 1, 1,
0, 0, 1, 4, 4, 6, 1, 5, 0, 1, 1, 2, 1, 1, 0, 0, 1, 0, 1, 1, 1, 6, 5, 1,
5, 0, 6, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 6, 1, 1, 4, 0, 6, 4, 4, 4, 6, 3, 3, 5, 5, 5, 4, 1, 4, 4,
4, 4, 4, 3, 6, 4, 4, 6, 4, 4, 4, 4, 4, 6, 4, 2, 5, 5, 6, 1, 4, 5, 4, 1,
4, 4, 1, 6, 6, 4, 4, 1, 2, 4, 0, 1, 1, 0, 5, 4, 6, 5, 4, 4, 4, 4, 4, 5,
5, 1, 4, 4, 1, 4, 6, 6, 1, 1, 4, 4, 4, 2, 5, 4, 6, 4, 0, 1, 1, 4, 0, 4,
1, 4, 4, 5], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 1, 1, 1, 5, 1, 3, 5, 3, 5, 2, 2, 1, 0, 1, 5, 5, 5, 6, 5, 5,
5, 5, 5, 1, 0, 0, 5, 6, 5, 0, 0, 5, 1, 5, 5, 5, 5, 5, 3, 5, 5, 5, 5, 0,
5, 3, 5, 1, 5, 1, 0, 5, 1, 0, 0, 0, 5, 5, 2, 3, 5, 5, 5, 1, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 2, 0, 3, 1, 5, 5, 5, 5, 1, 1, 3, 5, 5, 1, 5, 5, 5,
2, 1, 0, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([6, 4, 0, 1, 2, 1, 3, 0, 1, 2, 1, 6, 3, 6, 2, 0, 1, 4, 1, 6, 4, 1, 1, 6,
2, 6, 0, 2, 0, 0, 6, 2, 0, 6, 6, 1, 5, 2, 2, 1, 6, 1, 6, 0, 6, 1, 0, 3,
1, 0, 2, 1, 1, 6, 1, 6, 0, 0, 5, 2, 4, 4, 1, 6, 0, 0, 1, 6, 1, 1, 5, 6,
2, 1, 2, 1, 0, 6, 6, 1, 6, 5, 0, 0, 1, 6, 3, 5, 0, 3, 6, 1, 2, 2, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s684ms | Tot: 10s150ms | Loss: 3.541 | Acc: 40.143% (281/700) 7/7
Epoch: 44
[==============================================================>..] Step: 807ms | Tot: 58s205ms | Loss: 0.008 | Acc: 99.886% (3496/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 3, 3, 0, 4, 0, 0, 3, 0, 1, 1, 0, 0, 0, 1, 2, 0, 2, 1, 3, 3, 0, 1,
5, 5, 4, 6, 0, 0, 2, 4, 0, 0, 1, 0, 5, 0, 0, 0, 2, 0, 3, 3, 6, 1, 0, 3,
5, 1, 1, 0, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 3, 0, 0, 0, 3, 0,
2, 5, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 5, 5, 2, 3, 3, 0, 0, 1, 0, 0, 5, 0,
5, 5, 0, 0], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([4, 4, 5, 1, 0, 3, 1, 3, 1, 4, 1, 1, 4, 0, 5, 4, 5, 1, 1, 4, 6, 0, 1, 1,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 5, 6, 1, 1, 0, 4, 0, 1, 6, 3, 3,
1, 2, 5, 0, 1, 1, 1, 6, 0, 1, 1, 6, 2, 0, 1, 1, 6, 0, 0, 1, 2, 6, 4, 5,
5, 2, 0, 1, 5, 2, 4, 1, 0, 0, 5, 0, 4, 5, 0, 4, 4, 6, 6, 4, 2, 0, 0, 1,
6, 0, 5, 1], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 6, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 1, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 5, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 5, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 1, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 0, 3, 5, 5, 3, 1, 5, 6, 1, 0, 3, 4, 0, 5, 4, 1, 1, 5, 5, 4, 0, 1, 0,
5, 5, 4, 1, 1, 3, 1, 0, 4, 1, 1, 1, 1, 0, 5, 4, 1, 5, 1, 3, 1, 4, 1, 1,
5, 5, 5, 6, 3, 1, 2, 5, 1, 0, 1, 5, 1, 1, 2, 1, 5, 6, 0, 1, 6, 0, 1, 4,
3, 5, 1, 4, 3, 6, 5, 5, 3, 1, 1, 2, 1, 1, 0, 0, 1, 0, 1, 1, 1, 6, 5, 1,
5, 5, 6, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 4, 4, 4, 4, 6, 1, 1, 3, 4, 6, 4, 4, 4, 4, 3, 3, 5, 1, 3, 4, 1, 4, 4,
4, 4, 4, 3, 6, 4, 4, 6, 4, 4, 4, 4, 4, 4, 4, 2, 1, 1, 4, 5, 4, 5, 4, 1,
4, 4, 1, 0, 4, 4, 4, 3, 2, 4, 4, 1, 1, 1, 3, 4, 4, 5, 4, 4, 4, 4, 4, 1,
1, 5, 4, 4, 1, 4, 3, 5, 1, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 1, 4, 2, 4,
1, 4, 4, 5], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 2, 1, 5, 1, 5, 5, 5, 6, 5, 5,
5, 5, 5, 1, 0, 0, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 5, 3, 3, 5, 1, 5, 5, 0,
5, 3, 5, 5, 5, 5, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 5, 1, 1, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 0, 0, 3, 5, 5, 5, 3, 1, 1, 1, 3, 5, 5, 5, 5, 4, 5,
2, 1, 5, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([6, 4, 0, 1, 2, 1, 1, 0, 1, 1, 1, 6, 3, 6, 2, 0, 1, 5, 3, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 5, 6, 2, 0, 6, 4, 1, 5, 2, 3, 1, 6, 1, 6, 0, 6, 1, 0, 3,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 1, 2, 4, 6, 1, 6, 0, 1, 1, 1, 1, 1, 5, 6,
1, 1, 2, 1, 2, 6, 6, 1, 6, 1, 0, 0, 1, 6, 3, 6, 0, 3, 6, 1, 2, 2, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s696ms | Tot: 10s190ms | Loss: 3.503 | Acc: 41.143% (288/700) 7/7
Saving..
Epoch: 45
[==============================================================>..] Step: 817ms | Tot: 58s176ms | Loss: 0.019 | Acc: 99.629% (3487/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([5, 6, 3, 3, 0, 0, 0, 0, 3, 0, 6, 3, 0, 0, 0, 3, 2, 0, 0, 0, 3, 0, 0, 1,
5, 1, 0, 6, 0, 0, 2, 6, 0, 0, 6, 0, 6, 0, 0, 2, 2, 0, 3, 3, 0, 0, 0, 1,
5, 0, 1, 6, 0, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 6, 0, 1, 2, 0, 0, 2, 0,
2, 2, 0, 1, 1, 1, 0, 1, 0, 6, 1, 1, 5, 5, 2, 3, 3, 0, 0, 1, 0, 0, 5, 0,
0, 5, 0, 6], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 3, 1, 0, 3, 1, 3, 1, 4, 1, 2, 6, 1, 3, 0, 5, 1, 3, 4, 4, 0, 1, 3,
6, 0, 0, 3, 4, 6, 0, 6, 1, 6, 3, 6, 0, 0, 5, 1, 1, 0, 0, 5, 1, 6, 3, 3,
6, 2, 1, 0, 1, 1, 1, 6, 0, 1, 2, 2, 2, 0, 1, 5, 2, 0, 0, 3, 6, 6, 4, 2,
5, 6, 0, 0, 0, 2, 0, 4, 0, 0, 0, 0, 6, 1, 0, 1, 6, 0, 0, 4, 2, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 6, 2, 2, 2, 6, 2, 2, 3, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 6, 2, 2, 2, 3, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2,
2, 2, 2, 6, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 3, 2, 2, 2, 2, 2, 3,
2, 2, 2, 6, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2,
2, 2, 6, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 0, 3, 3, 1, 6, 3, 5, 6, 5, 0, 6, 5, 6, 5, 4, 3, 1, 1, 3, 4, 0, 2, 0,
5, 1, 6, 1, 1, 3, 3, 0, 4, 1, 1, 3, 1, 6, 5, 4, 3, 4, 1, 3, 6, 6, 1, 6,
1, 6, 5, 6, 3, 0, 3, 1, 1, 0, 1, 5, 0, 1, 2, 3, 6, 6, 6, 1, 6, 0, 1, 4,
3, 0, 0, 3, 6, 6, 5, 5, 3, 2, 6, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 0,
5, 0, 6, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 6, 4, 4, 6, 1, 1, 3, 6, 6, 4, 6, 4, 6, 3, 3, 5, 1, 3, 3, 3, 4, 6,
6, 4, 4, 3, 6, 4, 4, 6, 4, 4, 4, 5, 6, 6, 4, 2, 1, 1, 6, 3, 4, 1, 6, 1,
4, 4, 1, 0, 6, 6, 3, 6, 2, 6, 0, 6, 3, 1, 5, 4, 6, 3, 4, 4, 6, 4, 4, 3,
3, 0, 4, 4, 1, 4, 6, 6, 1, 0, 1, 4, 4, 6, 6, 4, 6, 0, 0, 1, 1, 4, 0, 6,
5, 6, 4, 6], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 2, 6, 3, 1, 0, 5, 1, 3, 2, 2, 1, 2, 6, 1, 0, 1, 0, 1, 6, 0, 5, 5,
5, 5, 5, 1, 0, 0, 1, 6, 5, 0, 0, 6, 1, 5, 1, 5, 5, 5, 1, 5, 1, 5, 5, 0,
5, 3, 1, 5, 1, 1, 0, 5, 1, 1, 0, 0, 3, 5, 2, 3, 5, 2, 0, 4, 1, 1, 1, 5,
5, 5, 5, 5, 2, 4, 5, 0, 0, 2, 1, 1, 1, 6, 1, 3, 1, 3, 5, 5, 5, 5, 4, 5,
2, 5, 1, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([6, 6, 3, 1, 2, 1, 3, 0, 0, 1, 0, 6, 3, 6, 2, 0, 1, 4, 3, 6, 4, 6, 1, 6,
6, 6, 0, 6, 0, 0, 6, 2, 0, 6, 6, 5, 1, 5, 3, 4, 6, 1, 6, 0, 6, 1, 0, 3,
1, 0, 2, 1, 6, 6, 3, 6, 0, 6, 4, 2, 1, 6, 0, 6, 0, 3, 1, 6, 3, 1, 5, 6,
1, 3, 6, 6, 0, 6, 6, 2, 6, 6, 0, 0, 1, 0, 3, 6, 0, 3, 6, 1, 0, 6, 2, 3,
2, 3, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s755ms | Tot: 10s251ms | Loss: 3.508 | Acc: 37.571% (263/700) 7/7
Epoch: 46
[==============================================================>..] Step: 846ms | Tot: 58s282ms | Loss: 0.082 | Acc: 97.371% (3408/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 1, 3, 0, 0, 0, 0, 3, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 3, 1, 2, 1,
5, 5, 4, 6, 0, 0, 0, 4, 2, 0, 4, 0, 1, 0, 0, 5, 0, 0, 3, 3, 0, 1, 0, 1,
5, 0, 1, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 0, 1, 0, 3, 2, 0, 0, 3, 0,
2, 2, 0, 5, 5, 1, 0, 2, 0, 6, 1, 0, 5, 1, 2, 3, 3, 0, 0, 2, 0, 0, 4, 0,
5, 5, 0, 0], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 5, 5, 0, 3, 1, 1, 1, 4, 4, 5, 4, 1, 5, 0, 5, 1, 1, 4, 6, 0, 1, 0,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 6, 5, 1, 1, 1, 0, 4, 5, 1, 6, 3, 3,
6, 2, 5, 0, 1, 0, 1, 0, 0, 1, 6, 2, 2, 1, 1, 4, 2, 2, 0, 1, 6, 6, 4, 5,
5, 2, 0, 1, 0, 5, 0, 1, 1, 0, 5, 0, 4, 4, 6, 4, 6, 5, 0, 4, 2, 0, 0, 1,
1, 5, 5, 5], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([0, 2, 2, 2, 2, 6, 2, 2, 2, 6, 2, 2, 1, 2, 0, 2, 2, 3, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 3, 2, 2, 2, 0, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 0, 3, 5, 5, 3, 5, 5, 6, 5, 0, 4, 1, 0, 5, 4, 0, 0, 5, 5, 4, 2, 5, 5,
5, 5, 4, 5, 1, 4, 1, 0, 4, 5, 3, 1, 1, 4, 5, 4, 0, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 5, 3, 1, 1, 0, 1, 5, 0, 5, 2, 1, 5, 0, 0, 1, 6, 5, 0, 4,
3, 0, 1, 4, 4, 6, 5, 5, 5, 0, 1, 2, 1, 0, 0, 5, 4, 5, 0, 1, 5, 6, 5, 3,
0, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 4, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 3, 1, 5, 1, 4, 1, 0, 4, 4,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 4, 5, 4, 4, 3, 1,
4, 4, 1, 0, 6, 4, 4, 4, 5, 4, 4, 6, 1, 1, 5, 4, 6, 3, 4, 4, 6, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 4, 1, 1, 4, 4, 4, 6, 6, 4, 4, 0, 0, 4, 5, 4, 2, 4,
1, 0, 4, 5], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 5, 1, 5, 5, 3, 5, 5, 5, 2, 2, 1, 0, 0, 2, 5, 5, 6, 5, 5,
5, 5, 5, 1, 5, 0, 5, 6, 5, 2, 0, 0, 1, 5, 5, 5, 1, 5, 3, 5, 1, 5, 5, 0,
5, 5, 5, 5, 5, 5, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 5, 0, 6, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 2, 0, 1, 5, 5, 5, 5, 5, 3, 1, 5, 0, 5, 5, 5, 2, 5,
2, 1, 5, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([6, 4, 3, 1, 2, 1, 1, 0, 1, 1, 1, 5, 3, 6, 2, 0, 1, 5, 1, 6, 5, 1, 1, 6,
2, 6, 2, 4, 0, 6, 6, 2, 3, 6, 6, 6, 5, 2, 0, 4, 6, 1, 6, 1, 6, 2, 0, 3,
0, 0, 2, 1, 0, 6, 3, 6, 0, 0, 4, 2, 3, 4, 3, 6, 0, 0, 6, 4, 1, 1, 5, 6,
5, 1, 2, 0, 0, 6, 2, 3, 6, 1, 0, 0, 1, 0, 3, 4, 0, 3, 3, 3, 4, 2, 0, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s756ms | Tot: 10s405ms | Loss: 3.543 | Acc: 41.429% (290/700) 7/7
Saving..
Epoch: 47
[==============================================================>..] Step: 780ms | Tot: 58s323ms | Loss: 0.075 | Acc: 97.571% (3415/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([5, 6, 5, 3, 0, 4, 0, 0, 3, 0, 3, 3, 1, 0, 0, 3, 6, 5, 2, 3, 3, 4, 1, 1,
5, 5, 4, 6, 6, 0, 2, 3, 1, 0, 1, 3, 5, 0, 0, 3, 2, 2, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 4, 4, 0, 2, 0, 3, 3, 3, 0, 1, 0, 0, 5, 3, 3, 6, 2, 3,
2, 6, 6, 1, 1, 1, 5, 1, 0, 6, 1, 1, 5, 4, 5, 3, 3, 0, 0, 1, 0, 4, 4, 0,
4, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 4, 1, 5, 2, 1, 1, 3, 1, 4, 4, 1, 4, 1, 5, 4, 5, 4, 1, 4, 4, 5, 1, 1,
6, 4, 1, 3, 4, 6, 0, 6, 1, 6, 3, 6, 3, 5, 1, 0, 6, 3, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 1, 4, 4, 1, 1, 1, 2, 2, 5, 1, 4, 1, 5, 4, 0, 5, 6, 4, 5,
5, 6, 0, 1, 4, 2, 6, 1, 0, 0, 0, 0, 4, 4, 6, 4, 6, 5, 0, 4, 2, 0, 0, 1,
6, 4, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 6, 2, 2, 2, 6, 2, 2, 3, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 6, 2, 2, 2, 5, 5, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2,
2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 6, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 1, 2, 2, 4, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 6, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([3, 4, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 5, 3, 5, 4, 5, 1, 5, 3, 1, 1, 2, 5,
5, 1, 4, 3, 1, 2, 5, 0, 4, 1, 3, 3, 1, 4, 5, 4, 1, 4, 1, 3, 3, 4, 3, 1,
1, 1, 5, 6, 3, 1, 4, 4, 1, 4, 1, 5, 3, 1, 6, 6, 1, 6, 6, 5, 6, 5, 6, 1,
3, 0, 1, 3, 6, 6, 6, 5, 3, 1, 1, 1, 1, 4, 0, 4, 1, 5, 1, 1, 1, 6, 5, 1,
5, 1, 6, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 3, 4, 1, 4, 4, 4, 6, 1, 3, 5, 0, 3, 3, 5, 4, 4,
4, 4, 4, 6, 6, 4, 4, 6, 4, 4, 4, 4, 4, 4, 4, 3, 5, 1, 4, 1, 4, 1, 3, 1,
4, 4, 1, 5, 6, 4, 4, 4, 2, 4, 4, 1, 1, 2, 4, 4, 6, 3, 4, 4, 6, 4, 4, 1,
3, 3, 4, 4, 1, 4, 3, 5, 4, 1, 4, 4, 4, 2, 6, 4, 6, 4, 5, 4, 1, 4, 6, 4,
3, 4, 4, 5], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 3, 3, 1, 1, 5, 1, 3, 5, 2, 5, 2, 3, 1, 1, 4, 5, 5, 3, 6, 5, 5,
1, 5, 5, 5, 5, 0, 5, 6, 5, 2, 0, 5, 1, 3, 5, 5, 5, 5, 4, 4, 1, 5, 5, 3,
5, 3, 5, 1, 5, 5, 0, 5, 1, 1, 4, 1, 4, 5, 2, 5, 5, 2, 5, 4, 5, 6, 5, 5,
5, 5, 5, 5, 2, 4, 0, 1, 5, 3, 1, 5, 3, 5, 5, 3, 1, 5, 5, 5, 5, 5, 1, 5,
5, 1, 5, 1], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([6, 4, 0, 1, 2, 1, 3, 6, 1, 1, 1, 5, 3, 6, 2, 0, 3, 5, 3, 6, 4, 6, 3, 6,
1, 6, 2, 4, 0, 0, 6, 1, 3, 6, 6, 6, 5, 5, 2, 4, 6, 1, 6, 1, 6, 1, 6, 3,
1, 6, 1, 1, 1, 6, 3, 6, 0, 6, 4, 2, 4, 6, 1, 6, 0, 0, 6, 1, 1, 1, 5, 6,
5, 3, 2, 1, 6, 6, 6, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 6, 3, 3, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s693ms | Tot: 10s244ms | Loss: 3.447 | Acc: 40.714% (285/700) 7/7
Epoch: 48
[==============================================================>..] Step: 760ms | Tot: 58s508ms | Loss: 0.054 | Acc: 98.200% (3437/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([5, 6, 1, 3, 3, 0, 1, 0, 5, 1, 1, 3, 1, 0, 0, 3, 3, 0, 2, 5, 3, 4, 0, 6,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 1, 0, 4, 2, 0, 0, 3, 6, 1, 0, 1,
5, 0, 0, 6, 0, 0, 1, 4, 0, 2, 0, 0, 0, 3, 0, 1, 0, 0, 5, 2, 0, 0, 3, 0,
0, 0, 0, 1, 1, 1, 5, 5, 5, 6, 1, 0, 5, 0, 0, 3, 3, 1, 0, 1, 0, 0, 5, 0,
2, 0, 0, 0], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 3, 1, 3, 1, 4, 1, 5, 4, 1, 5, 0, 5, 4, 1, 4, 4, 0, 0, 1,
3, 0, 1, 3, 3, 6, 0, 6, 0, 6, 3, 6, 3, 5, 6, 1, 1, 0, 5, 5, 1, 3, 3, 3,
6, 3, 1, 5, 1, 1, 1, 6, 6, 5, 1, 2, 2, 0, 1, 1, 6, 5, 0, 3, 3, 6, 4, 5,
5, 2, 0, 1, 1, 0, 6, 0, 0, 0, 0, 0, 4, 1, 6, 1, 6, 6, 0, 1, 1, 0, 0, 1,
6, 0, 5, 5], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 0, 0, 0, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 1, 2, 6, 1, 2, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 5, 2, 3, 2, 2, 2, 2, 2, 3,
2, 2, 2, 5, 2, 2, 4, 2, 2, 2, 2, 2, 2, 3, 2, 2, 0, 2, 3, 2, 2, 2, 2, 2,
2, 2, 1, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 0, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 1, 3, 5, 6, 1, 1, 5, 3, 4, 1, 5, 0,
5, 1, 6, 1, 1, 0, 3, 0, 4, 1, 1, 0, 1, 0, 5, 4, 0, 5, 1, 3, 5, 6, 1, 1,
5, 1, 5, 6, 3, 1, 3, 5, 1, 0, 1, 5, 3, 5, 6, 1, 6, 6, 0, 1, 6, 0, 6, 1,
3, 0, 6, 4, 3, 6, 5, 5, 1, 1, 1, 2, 1, 1, 0, 0, 1, 1, 5, 1, 5, 6, 5, 1,
5, 0, 6, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 3, 4, 6, 1, 1, 4, 6, 4, 4, 4, 6, 3, 3, 5, 1, 1, 3, 3, 4, 6,
4, 4, 4, 3, 6, 4, 4, 6, 4, 4, 4, 4, 4, 6, 4, 3, 5, 0, 6, 5, 4, 1, 3, 1,
4, 4, 1, 6, 6, 4, 3, 3, 1, 4, 0, 6, 5, 3, 5, 4, 6, 3, 4, 4, 4, 4, 4, 6,
3, 1, 4, 4, 1, 4, 6, 5, 1, 6, 4, 4, 4, 2, 6, 1, 3, 4, 0, 1, 1, 1, 6, 4,
0, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 5, 3, 5, 0, 5, 2, 3, 1, 5, 0, 5, 5, 5, 4, 5, 5,
5, 5, 5, 1, 5, 0, 1, 0, 5, 2, 0, 6, 5, 3, 5, 5, 5, 5, 4, 5, 5, 5, 5, 3,
5, 3, 5, 3, 5, 5, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 5, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 6, 5, 0, 5, 3, 1, 5, 4, 5, 0, 3, 1, 1, 5, 5, 1, 5, 4, 5,
0, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([6, 4, 0, 1, 2, 1, 3, 2, 1, 1, 1, 6, 3, 6, 2, 0, 3, 0, 1, 6, 5, 6, 3, 6,
2, 6, 0, 4, 3, 5, 3, 2, 0, 6, 6, 5, 1, 5, 5, 1, 6, 1, 6, 0, 6, 1, 5, 0,
1, 0, 1, 1, 3, 6, 3, 6, 0, 6, 6, 2, 1, 6, 3, 6, 0, 3, 1, 6, 1, 1, 4, 6,
5, 3, 2, 1, 6, 6, 6, 3, 6, 6, 0, 0, 1, 3, 3, 6, 6, 3, 6, 3, 2, 6, 2, 3,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s710ms | Tot: 10s251ms | Loss: 3.562 | Acc: 39.429% (276/700) 7/7
Epoch: 49
[==============================================================>..] Step: 820ms | Tot: 58s314ms | Loss: 0.085 | Acc: 97.143% (3400/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([1, 6, 4, 3, 1, 6, 0, 1, 6, 1, 4, 3, 1, 0, 0, 6, 1, 0, 2, 1, 3, 4, 1, 1,
3, 5, 4, 3, 6, 0, 2, 6, 0, 0, 1, 0, 6, 0, 0, 4, 0, 0, 4, 3, 3, 1, 0, 1,
1, 3, 1, 6, 2, 0, 4, 4, 4, 2, 0, 3, 0, 3, 0, 1, 1, 0, 1, 0, 0, 0, 3, 1,
2, 2, 0, 1, 1, 1, 0, 1, 0, 6, 1, 1, 5, 1, 2, 3, 3, 0, 0, 1, 3, 0, 1, 0,
1, 1, 0, 0], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([1, 6, 1, 1, 1, 3, 4, 1, 1, 4, 4, 1, 4, 1, 1, 0, 1, 1, 1, 4, 4, 0, 1, 0,
1, 0, 1, 1, 0, 6, 0, 6, 1, 6, 0, 6, 6, 2, 6, 1, 6, 0, 4, 1, 0, 6, 3, 3,
1, 3, 1, 1, 1, 0, 1, 6, 1, 1, 1, 2, 0, 0, 0, 6, 1, 5, 4, 3, 3, 6, 4, 5,
5, 6, 2, 1, 0, 0, 2, 4, 0, 0, 1, 0, 6, 1, 0, 4, 6, 6, 6, 1, 1, 0, 1, 1,
6, 1, 5, 1], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 6, 2, 2, 0, 3, 2, 2, 1, 2, 0, 2, 2, 1, 2, 2, 2, 2, 2, 1,
1, 2, 2, 2, 1, 2, 6, 1, 2, 1, 1, 1, 2, 2, 2, 2, 2, 2, 6, 2, 2, 1, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 2, 4, 2, 2, 1, 2, 2, 1, 2, 2,
2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 1, 2, 2, 1, 2, 2,
2, 2, 1, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([1, 1, 3, 5, 1, 6, 3, 5, 6, 1, 0, 6, 1, 1, 1, 6, 4, 1, 1, 3, 1, 1, 1, 0,
0, 1, 4, 3, 1, 3, 1, 3, 4, 1, 1, 1, 1, 1, 1, 4, 0, 1, 1, 0, 1, 4, 1, 1,
1, 1, 1, 6, 3, 1, 3, 1, 1, 0, 1, 5, 1, 1, 1, 1, 1, 6, 0, 1, 6, 1, 1, 1,
3, 1, 1, 1, 3, 6, 1, 5, 1, 0, 1, 2, 1, 0, 1, 3, 1, 1, 1, 1, 1, 6, 5, 1,
1, 1, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 3, 4, 1, 1, 3, 4, 1, 4, 4, 4, 4, 1, 4, 1, 1, 6, 6, 1, 4, 6,
6, 4, 4, 3, 6, 4, 6, 4, 4, 4, 4, 4, 6, 4, 4, 4, 1, 1, 6, 1, 4, 1, 3, 1,
2, 4, 1, 1, 6, 4, 4, 4, 1, 6, 4, 1, 1, 3, 1, 4, 6, 3, 4, 1, 6, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 4, 1, 1, 4, 4, 4, 1, 6, 1, 6, 0, 0, 4, 1, 4, 1, 6,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 1, 3, 6, 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 1, 1, 1, 1, 1, 1, 5, 0, 1, 1,
0, 5, 5, 1, 0, 1, 1, 6, 1, 2, 2, 3, 1, 1, 5, 1, 1, 2, 4, 1, 3, 1, 1, 0,
5, 0, 1, 3, 5, 1, 0, 5, 1, 3, 4, 1, 3, 1, 2, 3, 5, 1, 1, 6, 1, 1, 1, 5,
5, 1, 1, 5, 2, 4, 1, 1, 6, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 4, 1,
1, 1, 1, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([6, 4, 6, 6, 2, 1, 1, 0, 1, 1, 1, 6, 3, 1, 2, 0, 1, 1, 3, 6, 4, 1, 1, 6,
2, 1, 0, 4, 0, 6, 6, 2, 0, 6, 1, 6, 1, 1, 1, 1, 6, 1, 6, 0, 6, 1, 6, 3,
3, 0, 1, 1, 1, 6, 1, 6, 6, 6, 1, 1, 1, 6, 1, 6, 1, 3, 1, 1, 1, 1, 1, 6,
1, 1, 0, 6, 6, 6, 6, 3, 6, 6, 1, 0, 1, 2, 3, 6, 3, 3, 3, 3, 2, 6, 0, 4,
1, 4, 1, 6], device='cuda:0')
[=======================================================>.........] Step: 1s705ms | Tot: 10s213ms | Loss: 4.115 | Acc: 34.000% (238/700) 7/7
Epoch: 50
[==============================================================>..] Step: 810ms | Tot: 58s233ms | Loss: 0.078 | Acc: 97.743% (3421/3500) 28/28
(per-class target/prediction tensor dumps for the 7 test batches omitted)
[=======================================================>.........] Step: 1s746ms | Tot: 10s279ms | Loss: 3.504 | Acc: 39.571% (277/700) 7/7
Epoch: 51
[==============================================================>..] Step: 785ms | Tot: 58s673ms | Loss: 0.090 | Acc: 97.314% (3406/3500) 28/28
(test-batch target/prediction tensor dumps omitted)
[=======================================================>.........] Step: 1s695ms | Tot: 10s204ms | Loss: 3.765 | Acc: 38.714% (271/700) 7/7
Epoch: 52
[==============================================================>..] Step: 835ms | Tot: 58s479ms | Loss: 0.118 | Acc: 95.857% (3355/3500) 28/28
(test-batch target/prediction tensor dumps omitted)
[=======================================================>.........] Step: 1s720ms | Tot: 10s255ms | Loss: 3.340 | Acc: 39.857% (279/700) 7/7
Epoch: 53
[==============================================================>..] Step: 827ms | Tot: 58s552ms | Loss: 0.062 | Acc: 97.743% (3421/3500) 28/28
(test-batch target/prediction tensor dumps omitted)
[=======================================================>.........] Step: 1s724ms | Tot: 10s295ms | Loss: 3.657 | Acc: 35.286% (247/700) 7/7
Epoch: 54
[==============================================================>..] Step: 794ms | Tot: 58s570ms | Loss: 0.038 | Acc: 99.029% (3466/3500) 28/28
(test-batch target/prediction tensor dumps omitted)
[=======================================================>.........] Step: 1s741ms | Tot: 10s245ms | Loss: 3.700 | Acc: 40.000% (280/700) 7/7
Epoch: 55
[==============================================================>..] Step: 775ms | Tot: 58s457ms | Loss: 0.039 | Acc: 98.800% (3458/3500) 28/28
(test-batch target/prediction tensor dumps omitted)
[=======================================================>.........] Step: 1s690ms | Tot: 10s214ms | Loss: 3.403 | Acc: 41.857% (293/700) 7/7
Saving..
Epoch: 56
[==============================================================>..] Step: 788ms | Tot: 58s266ms | Loss: 0.028 | Acc: 99.257% (3474/3500) 28/28
(test-batch target/prediction tensor dumps omitted)
[=======================================================>.........] Step: 1s697ms | Tot: 10s231ms | Loss: 3.432 | Acc: 40.143% (281/700) 7/7
Epoch: 57
[==============================================================>..] Step: 777ms | Tot: 58s416ms | Loss: 0.025 | Acc: 99.429% (3480/3500) 28/28
(test-batch target/prediction tensor dumps omitted)
[=======================================================>.........] Step: 1s730ms | Tot: 10s232ms | Loss: 3.598 | Acc: 38.571% (270/700) 7/7
Epoch: 58
[==============================================================>..] Step: 806ms | Tot: 1m3s | Loss: 0.013 | Acc: 99.629% (3487/3500) 28/28
(test-batch target/prediction tensor dumps omitted)
[=======================================================>.........] Step: 1s817ms | Tot: 12s267ms | Loss: 3.390 | Acc: 39.143% (274/700) 7/7
Epoch: 59
[==============================================================>..] Step: 788ms | Tot: 1m1s | Loss: 0.006 | Acc: 99.943% (3498/3500) 28/28
(test-batch target/prediction tensor dumps omitted)
[=======================================================>.........] Step: 1s727ms | Tot: 10s188ms | Loss: 3.379 | Acc: 39.571% (277/700) 7/7
Epoch: 60
[==============================================================>..] Step: 787ms | Tot: 59s428ms | Loss: 0.003 | Acc: 99.914% (3497/3500) 28/28
(test-batch target/prediction tensor dumps omitted)
[=======================================================>.........] Step: 1s675ms | Tot: 10s123ms | Loss: 3.339 | Acc: 40.429% (283/700) 7/7
Epoch: 61
[==============================================================>..] Step: 755ms | Tot: 57s681ms | Loss: 0.002 | Acc: 99.971% (3499/3500) 28/28
(test-batch target/prediction tensor dumps omitted)
[=======================================================>.........] Step: 1s742ms | Tot: 9s994ms | Loss: 3.325 | Acc: 40.571% (284/700) 7/7
Epoch: 62
[==============================================================>..] Step: 836ms | Tot: 57s982ms | Loss: 0.002 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 4, 0, 0, 0, 0, 3, 1, 1, 0, 0, 3, 1, 0, 2, 0, 3, 4, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 4, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 1, 1, 1, 0, 1, 0, 6, 1, 1, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 0], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 6, 0, 0, 1,
6, 4, 0, 3, 4, 6, 0, 0, 1, 6, 0, 6, 0, 0, 1, 5, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 4, 1, 1, 1, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 0, 4, 4, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 1, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 6, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 1, 5, 4, 0, 1, 1, 5, 4, 1, 0, 0,
5, 1, 4, 1, 1, 3, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 1, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 3, 0, 0, 1, 6, 0, 1, 1,
0, 0, 1, 3, 3, 6, 1, 5, 1, 0, 1, 2, 1, 1, 0, 3, 4, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 4,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 4, 4, 4, 1, 1, 4, 4, 1, 5, 3, 5, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 4, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 1, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 1, 1, 1, 0, 1, 5, 5, 0, 0, 5, 5,
0, 5, 5, 1, 0, 0, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 0, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 5, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 4, 1, 1, 5, 5, 3, 5, 5, 1, 0, 5, 5, 5, 5, 1, 5,
5, 1, 5, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 1, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 6, 6, 2, 0, 6, 6, 1, 5, 5, 3, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 1, 1, 6, 3, 1, 5, 6,
1, 1, 0, 0, 2, 6, 6, 3, 6, 6, 0, 0, 1, 2, 3, 6, 0, 3, 5, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s700ms | Tot: 10s81ms | Loss: 3.297 | Acc: 40.714% (285/700) 7/7
Epoch: 63
[==============================================================>..] Step: 744ms | Tot: 57s958ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 4, 0, 0, 0, 0, 3, 1, 1, 0, 0, 3, 3, 0, 2, 0, 3, 4, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 4, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 5, 2, 0, 0, 3, 0,
2, 0, 0, 5, 1, 1, 0, 1, 0, 6, 1, 0, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 1,
6, 0, 0, 3, 4, 6, 0, 0, 1, 6, 0, 6, 0, 0, 1, 0, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 1, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 0, 4, 4, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 6, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 1, 5, 4, 0, 1, 1, 5, 4, 1, 0, 5,
5, 5, 4, 5, 1, 3, 1, 0, 4, 1, 1, 1, 1, 0, 5, 4, 1, 5, 1, 0, 6, 4, 1, 1,
5, 1, 5, 6, 3, 1, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 3, 6, 0, 1, 6, 0, 1, 1,
0, 0, 1, 3, 3, 6, 1, 5, 1, 1, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 5, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 6, 0, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 1, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 0, 5, 5,
0, 5, 5, 1, 0, 0, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 0, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 5, 3, 0, 5, 1, 0, 5, 5, 5, 5, 1, 5,
2, 1, 5, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 1, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 5, 5, 5, 3, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 3, 6, 3, 6, 0, 1, 6, 1, 3, 1, 5, 6,
1, 1, 0, 0, 0, 6, 6, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s746ms | Tot: 10s583ms | Loss: 3.245 | Acc: 39.429% (276/700) 7/7
Epoch: 64
[==============================================================>..] Step: 836ms | Tot: 57s750ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 4, 0, 0, 0, 0, 3, 1, 1, 0, 0, 3, 3, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 5, 2, 0, 0, 0, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 1, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 3, 6, 5, 5, 1, 4, 4, 0, 0, 0,
6, 4, 0, 3, 4, 6, 0, 0, 1, 6, 0, 6, 0, 0, 1, 1, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 0, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 0, 4, 4, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 1, 5, 4, 0, 1, 1, 5, 4, 1, 0, 0,
5, 5, 4, 5, 1, 3, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 1, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 3, 6, 0, 1, 6, 0, 1, 1,
0, 0, 1, 3, 3, 6, 1, 5, 1, 0, 1, 2, 1, 1, 0, 4, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 4, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 1, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 0, 5, 5,
0, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 0, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 2, 1, 5, 5, 3, 5, 5, 1, 0, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 1, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 5, 5, 5, 3, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 3, 6, 3, 6, 0, 1, 1, 1, 3, 1, 5, 6,
1, 1, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 2, 3, 6, 0, 3, 5, 1, 2, 6, 2, 4,
2, 6, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s710ms | Tot: 10s314ms | Loss: 3.200 | Acc: 39.429% (276/700) 7/7
Epoch: 65
[==============================================================>..] Step: 872ms | Tot: 59s704ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 4, 0, 0, 0, 0, 3, 1, 1, 0, 0, 3, 3, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 4, 0, 5, 0, 0, 2, 2, 0, 3, 3, 0, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 1, 1, 1, 0, 1, 0, 6, 1, 0, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 3, 0, 5, 5, 3, 4, 4, 0, 0, 1,
6, 4, 0, 3, 4, 6, 0, 0, 1, 6, 0, 6, 0, 0, 1, 1, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 0, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 4, 4, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 5, 4, 1, 0, 0,
5, 0, 4, 5, 1, 3, 1, 0, 4, 1, 1, 1, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 1, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 3, 6, 0, 1, 6, 0, 1, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 0, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 1, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 4, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 1, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 0, 5, 5,
0, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 0, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 2, 1, 5, 5, 3, 5, 5, 1, 0, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 1, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 5, 5, 5, 3, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 3, 6, 3, 6, 0, 1, 1, 1, 3, 1, 5, 6,
1, 1, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 2, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s704ms | Tot: 10s498ms | Loss: 3.167 | Acc: 40.000% (280/700) 7/7
Epoch: 66
[==============================================================>..] Step: 805ms | Tot: 58s459ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 4, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 3, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 4, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 1, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 0], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 3, 6, 5, 5, 1, 4, 4, 0, 0, 1,
6, 4, 0, 1, 4, 6, 0, 0, 1, 6, 0, 6, 0, 0, 1, 1, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 0, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 0, 4, 4, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 6, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 1, 5, 4, 0, 1, 1, 5, 4, 1, 0, 0,
5, 5, 4, 5, 1, 3, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 1, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 3, 6, 0, 1, 6, 0, 1, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 1, 1, 2, 1, 1, 0, 4, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 1, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 0, 5, 5,
0, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 0, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 5, 3, 5, 5, 1, 0, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 1, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 5, 5, 5, 3, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 3, 6, 3, 6, 0, 1, 1, 1, 3, 1, 5, 6,
1, 1, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 2, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s745ms | Tot: 10s214ms | Loss: 3.134 | Acc: 39.714% (278/700) 7/7
Epoch: 67
[==============================================================>..] Step: 805ms | Tot: 58s280ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 4, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 3, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 4, 0, 5, 0, 0, 4, 2, 0, 3, 3, 0, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 0, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 6, 5, 5, 1, 4, 4, 0, 0, 0,
6, 4, 0, 1, 4, 6, 0, 0, 1, 6, 0, 6, 0, 0, 1, 5, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 0, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 0, 4, 4, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 1, 5, 4, 0, 1, 1, 5, 4, 1, 0, 5,
5, 5, 4, 5, 1, 3, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 5, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 5, 6, 0, 1, 6, 0, 1, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 1, 1, 2, 1, 1, 0, 4, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 4,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 0, 5, 5,
0, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 0, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 5, 3, 5, 5, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 1, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 5, 5, 5, 5, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 1, 1, 6, 3, 1, 5, 6,
1, 1, 0, 0, 0, 6, 2, 3, 6, 6, 0, 0, 1, 2, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s685ms | Tot: 10s183ms | Loss: 3.099 | Acc: 39.857% (279/700) 7/7
Epoch: 68
[==============================================================>..] Step: 807ms | Tot: 58s674ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 4, 0, 0, 0, 0, 3, 1, 1, 0, 0, 3, 3, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 2, 2, 0, 3, 3, 0, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 0, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 1,
6, 4, 0, 3, 4, 6, 0, 0, 1, 6, 0, 6, 0, 0, 1, 5, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 0, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 0, 4, 4, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 5, 4, 1, 0, 5,
5, 5, 4, 5, 1, 3, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 5, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 5, 6, 0, 1, 6, 0, 1, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 1, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 5, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 0, 5, 5,
0, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 0, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 0, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 5, 3, 5, 5, 1, 3, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 1, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 5, 5, 5, 3, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 1, 1, 6, 3, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s704ms | Tot: 10s222ms | Loss: 3.038 | Acc: 39.714% (278/700) 7/7
Epoch: 69
[==============================================================>..] Step: 791ms | Tot: 58s462ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 4, 0, 0, 0, 0, 3, 1, 1, 0, 0, 3, 3, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 4, 0, 5, 0, 0, 4, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 0, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 3, 4, 4, 0, 0, 0,
6, 4, 0, 3, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 5, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 0, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 0, 4, 4, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 5, 4, 1, 0, 5,
5, 5, 4, 5, 1, 3, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 5, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 5, 6, 0, 1, 6, 0, 1, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 4,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 5, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 5, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 6, 5, 5,
0, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 0, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 0, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 5, 3, 5, 5, 1, 3, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 1, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 5, 5, 5, 3, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 3, 6, 3, 6, 0, 1, 1, 6, 3, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 2, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s717ms | Tot: 10s217ms | Loss: 3.011 | Acc: 39.714% (278/700) 7/7
Epoch: 70
[==============================================================>..] Step: 821ms | Tot: 58s217ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 4, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 1, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 0, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 1,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 5, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 1, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 0, 4, 4, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 1, 5, 4, 0, 1, 1, 5, 4, 1, 0, 5,
5, 5, 4, 5, 1, 3, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 1, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 3, 6, 0, 1, 6, 0, 1, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 4,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 6, 5, 5,
0, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 0, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 5, 3, 5, 5, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 1, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 1, 5, 5, 5, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 1, 1, 6, 3, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 2, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s714ms | Tot: 10s292ms | Loss: 2.999 | Acc: 40.429% (283/700) 7/7
Epoch: 71
[==============================================================>..] Step: 829ms | Tot: 58s406ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 6, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 1, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 4, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 5, 2, 0, 0, 0, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 0, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 0], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 0,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 0, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 0, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 0, 4, 4, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 5, 4, 1, 0, 0,
0, 5, 4, 5, 1, 3, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 6, 4, 1, 1,
5, 1, 5, 6, 3, 1, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 3, 6, 0, 1, 6, 0, 1, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
5, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 6, 5, 5,
0, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 0, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 5, 3, 5, 5, 1, 3, 5, 5, 5, 5, 4, 5,
2, 1, 5, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 1, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 1, 5, 5, 5, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 3, 6, 3, 6, 0, 1, 1, 6, 3, 1, 5, 6,
1, 3, 0, 0, 0, 6, 2, 3, 6, 6, 0, 0, 1, 2, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s710ms | Tot: 10s234ms | Loss: 2.966 | Acc: 40.286% (282/700) 7/7
Epoch: 72
[==============================================================>..] Step: 798ms | Tot: 58s689ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 4, 0, 0, 3, 0, 3, 1, 1, 0, 0, 3, 1, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 1, 5, 5, 2, 3, 3, 0, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 1,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 5, 1, 5, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 1, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 1, 5, 4, 0, 1, 1, 5, 4, 1, 0, 5,
5, 5, 4, 5, 1, 3, 1, 0, 4, 1, 1, 4, 1, 1, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 5, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 3, 6, 0, 1, 6, 0, 1, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 4,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 4, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 5, 4, 4, 5, 4, 4, 4, 4, 4, 3,
5, 5, 4, 4, 1, 4, 3, 4, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 5, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 0, 5, 5,
5, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 5, 5, 2, 5, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 3, 1, 2, 1, 3, 2, 1, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 5, 5, 5, 3, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 4, 3, 6, 0, 1, 1, 6, 3, 1, 5, 6,
1, 1, 0, 0, 2, 6, 2, 3, 6, 1, 0, 0, 1, 2, 3, 6, 0, 3, 5, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s727ms | Tot: 10s224ms | Loss: 2.932 | Acc: 41.143% (288/700) 7/7
Epoch: 73
[Condensed cell output. Each epoch below originally printed: a training progress bar (28/28 batches, roughly 58–60 s per pass), then seven pairs of tensors on cuda:0 (the ground-truth labels and the model's predictions for the 100 test samples of each of the 7 classes), and finally a test progress bar (7/7 batches, about 10 s). The per-sample tensor dumps are omitted here; only the loss and accuracy figures are kept.]

Previous epoch (its header lies above this excerpt): Train Loss 0.001 | Acc 100.000% (3500/3500); Test Loss 2.891 | Acc 40.286% (282/700)
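The code that generates this output is not included in the excerpt, so the following is an illustrative sketch only. It assumes a standard PyTorch image-classification setup whose test loader yields one 100-sample batch per class; `model`, `test_loader`, and `device` are placeholder names, not identifiers from the notebook. The point is simply to show the kind of evaluation loop that would print the target/prediction tensor pairs and the closing "Loss | Acc" line summarized above.

###Code
# Illustrative sketch only (assumed, not taken from this notebook): an evaluation
# pass that prints ground-truth vs. predicted labels for each test batch and then
# an overall "Loss | Acc" summary in the style of the log above.
import torch
import torch.nn as nn

@torch.no_grad()
def evaluate(model, test_loader, device="cuda:0"):
    model.eval()
    criterion = nn.CrossEntropyLoss()
    total_loss, correct, total = 0.0, 0, 0
    for inputs, targets in test_loader:   # e.g. 7 batches of 100 samples, one class per batch
        inputs, targets = inputs.to(device), targets.to(device)
        outputs = model(inputs)
        total_loss += criterion(outputs, targets).item()
        predicted = outputs.argmax(dim=1)
        print(targets, predicted)         # the tensor pairs that appear in the raw output
        correct += predicted.eq(targets).sum().item()
        total += targets.size(0)
    print(f"Loss: {total_loss / len(test_loader):.3f} | "
          f"Acc: {100.0 * correct / total:.3f}% ({correct}/{total})")
###Output
_____no_output_____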
Epoch: 74
[==============================================================>..] Step: 779ms | Tot: 58s164ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 6, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 3, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 0, 5, 5, 2, 3, 3, 0, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 3, 4, 4, 0, 0, 0,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 0, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 0, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 2, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 3, 4, 1, 0, 0,
0, 5, 4, 5, 1, 2, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 1, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 3, 6, 0, 1, 6, 0, 1, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 6, 5, 5,
0, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 0, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 5, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 5, 5, 5, 5, 1, 3, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 0, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 5, 5, 5, 3, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 1, 1, 6, 3, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 2, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s730ms | Tot: 10s214ms | Loss: 2.868 | Acc: 40.714% (285/700) 7/7
Epoch: 75
[==============================================================>..] Step: 820ms | Tot: 59s536ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 4, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 3, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 4, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 1, 0, 5, 2, 0, 0, 0, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 1, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 0,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 1, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 1, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 0, 4, 4, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 0, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 1, 5, 4, 0, 1, 1, 3, 4, 1, 0, 0,
5, 1, 4, 5, 1, 2, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 5, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 3, 6, 0, 1, 6, 0, 1, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 1, 1, 2, 1, 1, 0, 4, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 4,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 4, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 1, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 4, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 6, 5, 5,
0, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 0, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 2, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 5, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 3, 1, 2, 1, 3, 2, 1, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 5, 6, 2, 0, 6, 6, 1, 5, 5, 3, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 1, 1, 1, 3, 1, 5, 6,
1, 1, 0, 0, 0, 6, 6, 3, 6, 6, 0, 0, 1, 2, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s792ms | Tot: 10s548ms | Loss: 2.865 | Acc: 40.857% (286/700) 7/7
Epoch: 76
[==============================================================>..] Step: 768ms | Tot: 58s923ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 4, 0, 0, 3, 0, 6, 1, 1, 0, 0, 3, 3, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 2, 2, 0, 3, 3, 0, 1, 0, 1,
5, 0, 0, 6, 0, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 1, 0, 5, 2, 0, 0, 0, 0,
0, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 1, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 3, 4, 4, 0, 0, 1,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 1, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 1, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 4, 4, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 1, 0, 5, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 3, 4, 1, 0, 5,
5, 5, 4, 5, 1, 3, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 5, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 3, 6, 0, 1, 6, 0, 1, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 4,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 6, 5, 5,
0, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 5, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 5, 5, 5, 3, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 3, 1, 2, 1, 3, 0, 0, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 1, 5, 5, 3, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 1, 1, 6, 3, 1, 5, 6,
1, 3, 0, 0, 0, 6, 2, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s706ms | Tot: 10s265ms | Loss: 2.824 | Acc: 41.429% (290/700) 7/7
Epoch: 77
[==============================================================>..] Step: 785ms | Tot: 58s454ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 4, 0, 0, 3, 0, 6, 1, 1, 0, 0, 3, 3, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 1, 0, 5, 2, 0, 0, 3, 0,
0, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 1, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 0,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 1, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 1, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 4, 1, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 5, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 3, 4, 1, 0, 0,
5, 5, 4, 5, 1, 3, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 5, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 3, 0, 0, 1, 6, 0, 1, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 4,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 2, 4, 4, 1, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 4, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 5, 3, 5, 3, 5, 2, 1, 1, 0, 1, 5, 5, 0, 6, 5, 5,
5, 5, 5, 1, 5, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 5, 5, 2, 5, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 5, 5, 5, 5, 1, 3, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 0, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 1, 5, 5, 5, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 1, 1, 6, 3, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 2, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s747ms | Tot: 10s372ms | Loss: 2.824 | Acc: 41.857% (293/700) 7/7
Epoch: 78
[==============================================================>..] Step: 801ms | Tot: 58s208ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 6, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 1, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 4, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 1, 0, 5, 2, 0, 0, 3, 0,
0, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 0, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 1,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 1, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 1, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 5, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 3, 4, 1, 0, 0,
5, 5, 4, 5, 1, 2, 1, 0, 4, 1, 1, 4, 0, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 5, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 5, 6, 0, 1, 6, 0, 6, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 4,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 2, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
5, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 3, 1, 0, 1, 5, 5, 0, 6, 5, 5,
5, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 2, 1, 5, 5, 5, 5, 5, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 0, 2, 1, 6, 3, 6, 2, 0, 1, 4, 1, 6, 4, 6, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 1, 5, 5, 5, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 0, 1, 1, 3, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s684ms | Tot: 10s240ms | Loss: 2.809 | Acc: 41.429% (290/700) 7/7
Epoch: 79
[==============================================================>..] Step: 781ms | Tot: 58s428ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 6, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 1, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 6, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 1, 0, 5, 2, 0, 0, 0, 0,
0, 0, 0, 5, 1, 1, 0, 1, 0, 6, 1, 0, 5, 5, 2, 3, 3, 0, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 0,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 0, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 0, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 5, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 3, 4, 1, 0, 0,
0, 5, 6, 5, 1, 2, 1, 0, 4, 1, 1, 4, 0, 0, 5, 4, 1, 5, 1, 0, 6, 4, 1, 1,
5, 1, 5, 6, 3, 5, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 5, 6, 0, 1, 6, 0, 6, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
5, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 3, 1, 0, 1, 5, 5, 0, 6, 5, 5,
0, 5, 5, 1, 0, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 0, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 2, 1, 5, 5, 5, 5, 5, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 0, 2, 1, 6, 3, 6, 2, 0, 1, 4, 1, 6, 4, 6, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 1, 5, 5, 5, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 0, 1, 6, 3, 1, 5, 6,
1, 3, 0, 0, 2, 6, 6, 3, 6, 6, 0, 0, 1, 2, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s769ms | Tot: 10s250ms | Loss: 2.799 | Acc: 41.429% (290/700) 7/7
Epoch: 80
[==============================================================>..] Step: 801ms | Tot: 58s304ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 6, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 1, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 1, 0, 5, 2, 0, 0, 0, 0,
0, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 1, 5, 5, 2, 3, 3, 0, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 1,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 0, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 1, 1, 2, 2, 0, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 5, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 3, 4, 1, 0, 5,
5, 5, 4, 5, 1, 3, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 5, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 5, 6, 0, 1, 6, 0, 6, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 4,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
5, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 3, 1, 0, 1, 5, 5, 0, 6, 5, 5,
5, 5, 5, 1, 5, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 5, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 2, 1, 5, 5, 3, 5, 5, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 3, 1, 2, 1, 3, 2, 0, 2, 1, 6, 3, 6, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 1, 5, 5, 5, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 0, 1, 6, 3, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s728ms | Tot: 10s368ms | Loss: 2.747 | Acc: 41.714% (292/700) 7/7
Epoch: 81
[==============================================================>..] Step: 901ms | Tot: 58s341ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 6, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 1, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 6, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 1, 0, 5, 2, 0, 0, 0, 0,
0, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 1, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
5, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 1,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 1, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 1, 1, 2, 2, 2, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 4, 1, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 5, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 5, 4, 1, 0, 5,
5, 5, 4, 5, 1, 2, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 1, 4, 1, 1,
5, 1, 5, 6, 3, 1, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 5, 6, 0, 1, 6, 0, 6, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
5, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 3, 1, 0, 1, 5, 5, 0, 6, 5, 5,
5, 5, 5, 1, 5, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 5, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 6], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 1, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 1, 5, 5, 5, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 1, 1, 6, 1, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s698ms | Tot: 10s205ms | Loss: 2.746 | Acc: 41.286% (289/700) 7/7
Epoch: 82
[==============================================================>..] Step: 775ms | Tot: 58s264ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 6, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 1, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 0, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 0, 5, 5, 2, 3, 3, 0, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 1,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 0, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 1, 1, 2, 2, 2, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 5, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 2, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 5, 4, 1, 0, 5,
0, 5, 4, 5, 1, 2, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 6, 4, 1, 1,
5, 1, 5, 6, 3, 1, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 5, 6, 0, 1, 6, 0, 6, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 2, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
5, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 3, 1, 0, 1, 5, 5, 0, 6, 5, 5,
0, 5, 5, 1, 5, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 5, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 2, 1, 5, 5, 5, 5, 5, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 0, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 5, 5, 5, 5, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 0, 1, 6, 3, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s769ms | Tot: 10s270ms | Loss: 2.720 | Acc: 41.429% (290/700) 7/7
Epoch: 83
[==============================================================>..] Step: 850ms | Tot: 58s352ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 6, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 1, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 4, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 0, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 1, 5, 5, 2, 3, 3, 0, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 1,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 0, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 1, 1, 2, 2, 2, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 5, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 6, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 3, 4, 1, 0, 5,
5, 5, 4, 5, 1, 2, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 6, 4, 1, 1,
5, 1, 5, 6, 3, 5, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 5, 6, 0, 1, 6, 0, 6, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 6, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
5, 1, 4, 4, 1, 4, 3, 4, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 3, 1, 0, 1, 5, 5, 0, 6, 5, 5,
5, 5, 5, 1, 5, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 2, 1, 5, 5, 5, 5, 5, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 0, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 1, 5, 5, 5, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 0, 1, 6, 3, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s700ms | Tot: 10s212ms | Loss: 2.688 | Acc: 41.286% (289/700) 7/7
Epoch: 84
[==============================================================>..] Step: 802ms | Tot: 58s216ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 6, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 1, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 0, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 1, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 1, 5, 5, 2, 3, 3, 0, 0, 1, 0, 0, 5, 0,
0, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([5, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 0,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 0, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 1, 1, 2, 2, 2, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 5, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 1, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 3, 5, 4, 0, 1, 1, 5, 4, 1, 0, 0,
0, 5, 4, 5, 1, 2, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 6, 4, 1, 1,
5, 1, 5, 6, 3, 1, 4, 1, 1, 0, 1, 5, 3, 1, 2, 1, 5, 6, 0, 1, 6, 0, 6, 1,
3, 0, 1, 3, 3, 6, 1, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 1, 4, 4, 1, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
5, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 3, 1, 0, 1, 5, 5, 0, 6, 5, 5,
5, 5, 5, 1, 5, 1, 1, 0, 5, 0, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 1, 0, 2, 1, 5, 5, 5, 5, 5, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 0, 1, 2, 1, 3, 2, 0, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 1, 5, 5, 5, 1, 6, 1, 6, 0, 6, 1, 0, 4,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 0, 1, 6, 1, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s728ms | Tot: 10s233ms | Loss: 2.690 | Acc: 41.000% (287/700) 7/7
Epoch: 85
Train 28/28 | Step: 774ms | Tot: 58s482ms | Loss: 0.001 | Acc: 100.000% (3500/3500)
Test   7/7 | Step: 1s696ms | Tot: 10s203ms | Loss: 2.670 | Acc: 41.286% (289/700)
Epoch: 86
Train 28/28 | Step: 835ms | Tot: 58s281ms | Loss: 0.001 | Acc: 100.000% (3500/3500)
Test   7/7 | Step: 1s701ms | Tot: 10s198ms | Loss: 2.663 | Acc: 40.714% (285/700)
Epoch: 87
Train 28/28 | Step: 763ms | Tot: 58s18ms | Loss: 0.001 | Acc: 100.000% (3500/3500)
Test   7/7 | Step: 1s750ms | Tot: 10s255ms | Loss: 2.644 | Acc: 41.286% (289/700)
Epoch: 88
Train 28/28 | Step: 776ms | Tot: 58s137ms | Loss: 0.001 | Acc: 100.000% (3500/3500)
Test   7/7 | Step: 1s707ms | Tot: 10s166ms | Loss: 2.640 | Acc: 40.714% (285/700)
Epoch: 89
Train 28/28 | Step: 745ms | Tot: 58s495ms | Loss: 0.001 | Acc: 100.000% (3500/3500)
Test   7/7 | Step: 1s715ms | Tot: 10s254ms | Loss: 2.616 | Acc: 41.143% (288/700)
Epoch: 90
Train 28/28 | Step: 783ms | Tot: 58s20ms | Loss: 0.001 | Acc: 100.000% (3500/3500)
Test   7/7 | Step: 1s717ms | Tot: 10s219ms | Loss: 2.643 | Acc: 40.857% (286/700)
Epoch: 91
Train 28/28 | Step: 779ms | Tot: 58s106ms | Loss: 0.001 | Acc: 100.000% (3500/3500)
Test   7/7 | Step: 1s721ms | Tot: 10s202ms | Loss: 2.606 | Acc: 40.286% (282/700)
Epoch: 92
Train 28/28 | Step: 846ms | Tot: 58s124ms | Loss: 0.001 | Acc: 100.000% (3500/3500)
Test   7/7 | Step: 1s754ms | Tot: 10s216ms | Loss: 2.600 | Acc: 41.429% (290/700)
Epoch: 93
Train 28/28 | Step: 769ms | Tot: 58s494ms | Loss: 0.001 | Acc: 100.000% (3500/3500)
Test   7/7 | Step: 1s681ms | Tot: 10s214ms | Loss: 2.572 | Acc: 40.571% (284/700)
Epoch: 94
Train 28/28 | Step: 750ms | Tot: 58s262ms | Loss: 0.001 | Acc: 100.000% (3500/3500)
Test   7/7 | Step: 1s715ms | Tot: 10s249ms | Loss: 2.578 | Acc: 41.000% (287/700)
Epoch: 95
Train 28/28 | Step: 776ms | Tot: 58s102ms | Loss: 0.001 | Acc: 100.000% (3500/3500)
Test   7/7 | Step: 1s726ms | Tot: 10s224ms | Loss: 2.577 | Acc: 40.857% (286/700)
Epoch: 96
Train 28/28 | Step: 819ms | Tot: 58s105ms | Loss: 0.001 | Acc: 100.000% (3500/3500)
Test   7/7 | Step: 1s713ms | Tot: 10s328ms | Loss: 2.570 | Acc: 41.000% (287/700)
Epoch: 97
Train 28/28 | Step: 764ms | Tot: 58s232ms | Loss: 0.001 | Acc: 100.000% (3500/3500)
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 1, 5, 5, 3, 1, 6, 1, 6, 0, 6, 1, 0, 3,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 1, 1, 6, 1, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s797ms | Tot: 10s302ms | Loss: 2.573 | Acc: 41.286% (289/700) 7/7
Epoch: 98
[==============================================================>..] Step: 880ms | Tot: 58s394ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 0, 0, 0, 0, 0, 6, 1, 1, 0, 0, 3, 1, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 2, 2, 0, 3, 3, 6, 1, 0, 1,
5, 0, 0, 0, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 0, 1, 1, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
5, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([1, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 0,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 5, 1, 0, 1, 0, 5, 5, 1, 6, 1, 3,
1, 2, 1, 6, 1, 0, 1, 6, 1, 1, 2, 2, 2, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 5, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 2, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 2, 5, 4, 0, 1, 1, 3, 4, 1, 0, 0,
0, 5, 4, 5, 1, 2, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 0, 6, 4, 1, 1,
5, 1, 5, 6, 3, 1, 4, 1, 1, 0, 1, 5, 0, 1, 2, 1, 5, 6, 0, 1, 6, 0, 6, 1,
3, 0, 1, 3, 3, 6, 5, 5, 1, 2, 1, 2, 1, 1, 0, 4, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 1, 3, 5, 5, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 2, 4, 4, 1, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
5, 1, 4, 4, 1, 4, 3, 4, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 3, 1, 0, 1, 5, 5, 0, 6, 5, 5,
5, 5, 5, 1, 5, 1, 1, 6, 5, 0, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 5, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 0, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 3, 1, 2, 1, 3, 2, 0, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 5, 5, 5, 5, 1, 6, 1, 6, 0, 6, 1, 0, 3,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 1, 6, 6, 1, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s706ms | Tot: 10s175ms | Loss: 2.547 | Acc: 41.429% (290/700) 7/7
Epoch: 99
[==============================================================>..] Step: 782ms | Tot: 58s172ms | Loss: 0.001 | Acc: 100.000% (3500/3500) 28/28
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0], device='cuda:0') tensor([0, 6, 0, 3, 0, 6, 0, 0, 0, 0, 6, 3, 1, 0, 0, 3, 1, 0, 2, 1, 3, 0, 5, 1,
5, 1, 4, 6, 0, 0, 2, 6, 0, 0, 1, 0, 5, 0, 0, 2, 2, 0, 3, 3, 0, 1, 0, 1,
5, 0, 0, 0, 2, 0, 1, 4, 0, 2, 0, 3, 0, 3, 0, 5, 1, 0, 5, 2, 0, 0, 3, 0,
2, 2, 0, 5, 1, 1, 0, 1, 0, 6, 1, 1, 5, 5, 2, 3, 3, 1, 0, 1, 0, 0, 5, 0,
5, 5, 0, 1], device='cuda:0')
tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1], device='cuda:0') tensor([1, 6, 1, 5, 0, 1, 1, 3, 1, 4, 4, 2, 4, 1, 5, 0, 5, 5, 1, 4, 4, 0, 0, 0,
6, 4, 0, 1, 4, 6, 0, 6, 1, 6, 0, 6, 0, 0, 1, 0, 1, 0, 5, 5, 1, 6, 3, 3,
1, 2, 1, 6, 1, 0, 1, 6, 0, 1, 2, 2, 2, 0, 0, 4, 2, 5, 4, 1, 3, 6, 4, 5,
5, 2, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 4, 1, 0, 4, 4, 6, 6, 4, 2, 0, 5, 1,
6, 0, 5, 0], device='cuda:0')
tensor([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2], device='cuda:0') tensor([2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 5,
1, 2, 2, 2, 2, 2, 1, 2, 0, 2, 5, 5, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2,
2, 2, 0, 2], device='cuda:0')
tensor([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
3, 3, 3, 3], device='cuda:0') tensor([0, 1, 3, 5, 1, 6, 3, 5, 6, 5, 0, 6, 4, 2, 5, 4, 0, 1, 1, 3, 4, 1, 0, 0,
0, 5, 4, 5, 1, 2, 1, 0, 4, 1, 1, 4, 1, 0, 5, 4, 1, 5, 1, 3, 6, 4, 1, 1,
5, 1, 5, 6, 3, 1, 3, 1, 1, 0, 1, 5, 0, 1, 2, 1, 5, 6, 0, 1, 6, 0, 6, 1,
3, 0, 1, 3, 3, 6, 5, 5, 1, 3, 1, 2, 1, 1, 0, 3, 1, 1, 1, 1, 1, 6, 5, 3,
5, 5, 1, 1], device='cuda:0')
tensor([4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
4, 4, 4, 4], device='cuda:0') tensor([4, 6, 4, 4, 4, 4, 1, 1, 4, 4, 1, 4, 4, 4, 4, 3, 3, 5, 5, 3, 3, 1, 4, 6,
4, 4, 4, 6, 6, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 5, 1, 6, 5, 4, 5, 4, 1,
4, 4, 1, 0, 6, 4, 4, 3, 2, 4, 4, 1, 5, 3, 4, 4, 4, 5, 4, 4, 4, 4, 4, 3,
3, 1, 4, 4, 1, 4, 3, 6, 4, 1, 4, 4, 4, 6, 6, 4, 6, 0, 0, 1, 5, 4, 2, 4,
1, 4, 4, 1], device='cuda:0')
tensor([5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5,
5, 5, 5, 5], device='cuda:0') tensor([5, 5, 5, 1, 3, 1, 1, 5, 1, 3, 5, 3, 5, 2, 3, 1, 0, 1, 5, 5, 0, 6, 5, 5,
5, 5, 5, 1, 5, 1, 1, 6, 5, 0, 0, 1, 1, 5, 5, 5, 5, 5, 1, 5, 3, 5, 5, 0,
5, 0, 5, 1, 5, 1, 0, 5, 1, 1, 0, 0, 5, 5, 2, 3, 5, 2, 1, 4, 1, 1, 5, 5,
5, 5, 5, 5, 2, 4, 5, 0, 0, 1, 1, 5, 5, 5, 5, 3, 1, 5, 5, 5, 5, 5, 4, 5,
2, 1, 5, 2], device='cuda:0')
tensor([6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6,
6, 6, 6, 6], device='cuda:0') tensor([0, 4, 3, 1, 2, 1, 3, 2, 0, 2, 1, 6, 3, 1, 2, 0, 1, 4, 1, 6, 4, 1, 3, 6,
2, 6, 0, 4, 0, 0, 6, 2, 0, 6, 6, 5, 5, 5, 5, 1, 6, 1, 6, 0, 6, 1, 0, 3,
1, 0, 2, 1, 1, 6, 3, 6, 0, 0, 4, 2, 5, 6, 3, 6, 0, 1, 6, 6, 1, 1, 5, 6,
1, 3, 0, 0, 2, 6, 2, 3, 6, 6, 0, 0, 1, 3, 3, 6, 0, 3, 6, 1, 2, 6, 2, 4,
2, 4, 3, 6], device='cuda:0')
[=======================================================>.........] Step: 1s728ms | Tot: 10s312ms | Loss: 2.531 | Acc: 41.143% (288/700) 7/7
|
documents/regression_using_scikit_learn.ipynb | ###Markdown
Regression using scikit-learn. Explore the relationship between model complexity and generalization performance by adjusting key parameters of various supervised learning models. Regression
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
# Create Data
np.random.seed(0)
n = 15
x = np.linspace(0,10,n) + np.random.randn(n)/5
y = np.sin(x)+x/6 + np.random.randn(n)/10
x
y
# Split the data
X_train, X_test, y_train, y_test = train_test_split(x,y, random_state=0)
###Output
_____no_output_____
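###Markdown
A quick check of the split sizes (illustrative): with the default 75/25 split of the 15 points, `X_train` holds 11 entries and `X_test` holds 4, which is what the hard-coded `reshape(11,1)` and `reshape(4,1)` calls further below assume.
###Code
X_train.shape, X_test.shape
###Output
_____no_output_____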
###Markdown
You can use the following function to visualize the dataset: it plots a scatterplot of the data points, categorized into training and test sets.
###Code
def part1_scatter():
import matplotlib.pyplot as plt
get_ipython().magic('matplotlib notebook')
plt.figure()
    plt.scatter(X_train, y_train, label='training data')
plt.scatter(X_test, y_test, label='test data')
plt.legend(loc=4);
part1_scatter()
###Output
_____no_output_____
###Markdown
Write a function that fits a polynomial LinearRegression model on the *training data* `X_train` for degrees 1, 3, 6, and 9. (Use PolynomialFeatures in sklearn.preprocessing to create the polynomial features and then fit a linear regression model.) • For each model, find 100 predicted values over the interval x = 0 to 10 (e.g. `np.linspace(0,10,100)`) and store this in a numpy array. • The first row of this array should correspond to the output from the model trained on degree 1, the second row degree 3, the third row degree 6, and the fourth row degree 9.
###Code
def answer_one():
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
result = np.zeros((4,100))
# Your code here
for i, degree in enumerate([1,3,6,9]):
poly = PolynomialFeatures(degree=degree)
X_poly = poly.fit_transform(X_train.reshape(11,1))
linreg = LinearRegression().fit(X_poly, y_train)
y = linreg.predict(poly.fit_transform(np.linspace(0,10,100).reshape(100,1)))
result[i,:] = y
return result # Return your answer
answer_one()
# feel free to use the function plot_one() to replicate the figure
# from the prompt once you have completed question one
def plot_one(degree_predictions):
import matplotlib.pyplot as plt
get_ipython().magic('matplotlib notebook')
plt.figure(figsize=(10,5))
plt.plot(X_train, y_train, 'o', label='training data', markersize=10)
plt.plot(X_test, y_test, 'o', label='test data', markersize=10)
for i,degree in enumerate([1,3,6,9]):
plt.plot(np.linspace(0,10,100), degree_predictions[i], alpha=0.8, lw=2, label='degree={}'.format(degree))
plt.ylim(-1,2.5)
plt.legend(loc=4)
plot_one(answer_one())
###Output
_____no_output_____
###Markdown
Write a function that fits a polynomial LinearRegression model on the training data `X_train` for degrees 0 through 9. For each model compute the $R^2$ (coefficient of determination) regression score on the training data as well as the test data, and return both of these arrays in a tuple. *This function should return one tuple of numpy arrays `(r2_train, r2_test)`. Both arrays should have shape `(10,)`*
###Code
def answer_two():
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import r2_score
r2_train = np.zeros(10)
r2_test = np.zeros(10)
# Your code here
for i in range(10):
poly = PolynomialFeatures(degree=i)
# Train and score x_train
X_poly = poly.fit_transform(X_train.reshape(11,1))
linreg = LinearRegression().fit(X_poly, y_train)
r2_train[i] = linreg.score(X_poly, y_train);
# Score x_test (do not train)
X_test_poly = poly.fit_transform(X_test.reshape(4,1))
r2_test[i] = linreg.score(X_test_poly, y_test)
return (r2_train, r2_test)
answer_two()
###Output
_____no_output_____
###Markdown
Based on the $R^2$ scores from above (degree levels 0 through 9), what degree level corresponds to a model that is underfitting? What degree level corresponds to a model that is overfitting? What choice of degree level would provide a model with good generalization performance on this dataset? Note: there may be multiple correct solutions to this. (Hint: Try plotting the $R^2$ scores from above to visualize the relationship between degree level and $R^2$) *This function should return one tuple with the degree values in this order: `(Underfitting, Overfitting, Good_Generalization)`*
###Code
def answer_three():
r2_scores = answer_two()
df = pd.DataFrame({'training_score':r2_scores[0], 'test_score':r2_scores[1]})
df['diff'] = df['training_score'] - df['test_score']
df = df.sort_values(['diff'])
good_gen = df.index[0]
df = df.sort_values(['diff'], ascending = False)
overfitting = df.index[0]
df = df.sort_values(['training_score'])
underfitting = df.index[0]
return (underfitting,overfitting,good_gen)
answer_three()
###Output
_____no_output_____ |
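###Markdown
Following the hint above, one possible (illustrative) way to inspect the relationship between degree level and $R^2$ is to plot the two score arrays returned by `answer_two()`:
###Code
def plot_r2_scores():
    import matplotlib.pyplot as plt
    get_ipython().magic('matplotlib notebook')
    r2_train, r2_test = answer_two()
    plt.figure()
    plt.plot(range(10), r2_train, 'o-', label='training $R^2$')
    plt.plot(range(10), r2_test, 'o-', label='test $R^2$')
    plt.xlabel('polynomial degree')
    plt.ylabel('$R^2$ score')
    plt.legend(loc=4)
plot_r2_scores()
###Output
_____no_output_____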
Model backlog/Models/Inference/132-cassava-leaf-inf-effnetb3-scl-imagenet-384x384.ipynb | ###Markdown
Dependencies
###Code
# !pip install --quiet /kaggle/input/kerasapplications
# !pip install --quiet /kaggle/input/efficientnet-git
import warnings, glob
from tensorflow.keras import Sequential, Model
# import efficientnet.tfkeras as efn
from cassava_scripts import *
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Hardware configuration
###Code
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
strategy, tpu = set_up_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
###Output
REPLICAS: 1
###Markdown
Model parameters
###Code
BATCH_SIZE = 8 * REPLICAS
HEIGHT = 384
WIDTH = 384
CHANNELS = 3
N_CLASSES = 5
TTA_STEPS = 0 # Do TTA if > 0
###Output
_____no_output_____
###Markdown
Augmentation
###Code
def data_augment(image, label):
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# Flips
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
# Rotates
if p_rotate > .75:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .5:
image = tf.image.rot90(image, k=2) # rotate 180º
elif p_rotate > .25:
image = tf.image.rot90(image, k=1) # rotate 90º
# # Pixel-level transforms
# if p_pixel_1 >= .4:
# image = tf.image.random_saturation(image, lower=.7, upper=1.3)
# if p_pixel_2 >= .4:
# image = tf.image.random_contrast(image, lower=.8, upper=1.2)
# if p_pixel_3 >= .4:
# image = tf.image.random_brightness(image, max_delta=.1)
# Crops
if p_crop > .7:
if p_crop > .9:
image = tf.image.central_crop(image, central_fraction=.7)
elif p_crop > .8:
image = tf.image.central_crop(image, central_fraction=.8)
else:
image = tf.image.central_crop(image, central_fraction=.9)
elif p_crop > .4:
crop_size = tf.random.uniform([], int(HEIGHT*.8), HEIGHT, dtype=tf.int32)
image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
# # Crops
# if p_crop > .6:
# if p_crop > .9:
# image = tf.image.central_crop(image, central_fraction=.5)
# elif p_crop > .8:
# image = tf.image.central_crop(image, central_fraction=.6)
# elif p_crop > .7:
# image = tf.image.central_crop(image, central_fraction=.7)
# else:
# image = tf.image.central_crop(image, central_fraction=.8)
# elif p_crop > .3:
# crop_size = tf.random.uniform([], int(HEIGHT*.6), HEIGHT, dtype=tf.int32)
# image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
return image, label
###Output
_____no_output_____
###Markdown
Auxiliary functions
###Code
# Datasets utility functions
def resize_image(image, label):
image = tf.image.resize(image, [HEIGHT, WIDTH])
image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])
return image, label
def process_path(file_path):
name = get_name(file_path)
img = tf.io.read_file(file_path)
img = decode_image(img)
# img, _ = scale_image(img, None)
# img = center_crop(img, HEIGHT, WIDTH)
return img, name
def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'):
dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled)
dataset = dataset.map(process_path, num_parallel_calls=AUTO)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.map(resize_image, num_parallel_calls=AUTO)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
###Output
_____no_output_____
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'
submission = pd.read_csv(f'{database_base_path}sample_submission.csv')
display(submission.head())
TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec')
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print(f'GCS: test: {NUM_TEST_IMAGES}')
model_path_list = glob.glob('/kaggle/input/132-cassava-leaf-effnetb3-scl-imagenet-384x384/*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
###Output
Models to predict:
/kaggle/input/132-cassava-leaf-effnetb3-scl-imagenet-384x384/model_0.h5
###Markdown
Model
###Code
class UnitNormLayer(L.Layer):
"""
Normalize vectors (euclidean norm) in batch to unit hypersphere.
"""
def __init__(self, **kwargs):
super(UnitNormLayer, self).__init__(**kwargs)
def call(self, input_tensor):
norm = tf.norm(input_tensor, axis=1)
return input_tensor / tf.reshape(norm, [-1, 1])
def encoder_fn(input_shape):
inputs = L.Input(shape=input_shape, name='input_image')
# base_model = efn.EfficientNetB3(input_tensor=inputs,
base_model = tf.keras.applications.EfficientNetB3(input_tensor=inputs,
include_top=False,
weights=None,
pooling='avg')
norm_embeddings = UnitNormLayer()(base_model.output)
model = Model(inputs=inputs, outputs=norm_embeddings)
return model
def classifier_fn(input_shape, N_CLASSES, encoder, trainable=True):
for layer in encoder.layers:
layer.trainable = trainable
unfreeze_model(encoder) # unfreeze all layers except "batch normalization"
inputs = L.Input(shape=input_shape, name='input_image')
features = encoder(inputs)
features = L.Dropout(.5)(features)
features = L.Dense(512, activation='relu')(features)
features = L.Dropout(.5)(features)
output = L.Dense(N_CLASSES, activation='softmax', name='output')(features)
output_healthy = L.Dense(1, activation='sigmoid', name='output_healthy')(features)
output_cmd = L.Dense(1, activation='sigmoid', name='output_cmd')(features)
model = Model(inputs=inputs, outputs=[output, output_healthy, output_cmd])
return model
with strategy.scope():
encoder = encoder_fn((None, None, CHANNELS))
model = classifier_fn((None, None, CHANNELS), N_CLASSES, encoder, trainable=True)
model.summary()
###Output
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_image (InputLayer) [(None, None, None, 0
__________________________________________________________________________________________________
model (Functional) (None, 1536) 10783535 input_image[0][0]
__________________________________________________________________________________________________
dropout (Dropout) (None, 1536) 0 model[0][0]
__________________________________________________________________________________________________
dense (Dense) (None, 512) 786944 dropout[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 512) 0 dense[0][0]
__________________________________________________________________________________________________
output (Dense) (None, 5) 2565 dropout_1[0][0]
__________________________________________________________________________________________________
output_healthy (Dense) (None, 1) 513 dropout_1[0][0]
__________________________________________________________________________________________________
output_cmd (Dense) (None, 1) 513 dropout_1[0][0]
==================================================================================================
Total params: 11,574,070
Trainable params: 11,399,471
Non-trainable params: 174,599
__________________________________________________________________________________________________
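###Markdown
A quick, illustrative sanity check of `UnitNormLayer` (the constant below is arbitrary): each row of the output should have unit Euclidean norm.
###Code
demo = UnitNormLayer()(tf.constant([[3.0, 4.0]]))
print(demo.numpy(), tf.norm(demo, axis=1).numpy())  # expect [[0.6, 0.8]] and a norm of ~1.0
###Output
_____no_output_____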
###Markdown
Test set predictions
###Code
files_path = f'{database_base_path}test_images/'
test_size = len(os.listdir(files_path))
test_preds = np.zeros((test_size, N_CLASSES))
for model_path in model_path_list:
print(model_path)
K.clear_session()
model.load_weights(model_path)
if TTA_STEPS > 0:
test_ds = get_dataset(files_path, tta=True).repeat()
ct_steps = TTA_STEPS * ((test_size/BATCH_SIZE) + 1)
preds = model.predict(test_ds, steps=ct_steps, verbose=1)[0][:(test_size * TTA_STEPS)]
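        # the repeated dataset yields predictions pass-by-pass; the Fortran-order reshape
        # below lines up the TTA_STEPS predictions of each image along axis 1 so they can be averaged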
preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1)
test_preds += preds / len(model_path_list)
else:
test_ds = get_dataset(files_path, tta=False)
x_test = test_ds.map(lambda image, image_name: image)
test_preds += model.predict(x_test)[0] / len(model_path_list)
test_preds = np.argmax(test_preds, axis=-1)
test_names_ds = get_dataset(files_path)
image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())]
submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})
submission.to_csv('submission.csv', index=False)
display(submission.head())
###Output
_____no_output_____ |
Python assignments 1 (4).ipynb | ###Markdown
DICTIONARY AND ITS DEFAULT FUNCTIONS
###Code
dict = {"name" : "Shiv", "age": "23", "email" : "@abc", "Address" : "Pune"}
dict
dict.items()
dict.keys()
dict.clear()
dict
dict.get("name")
dict
###Output
_____no_output_____
###Markdown
TUPLE AND ITS DEFAULT FUNCTIONS
###Code
tuple=("Raghav","@","Letsupgrade")
tuple
tuple.count("@")
tuple.index("Raghav")
###Output
_____no_output_____
###Markdown
SETS AND ITS DEFAULT FUNCTIONS
###Code
st={"Apple","Mango",1,2,3,4,5,5,2}
st
st1={"Apple",5}
st1
st1.issubset(st)
st1.isdisjoint(st)
st1.difference(st)
st1.issuperset(st)
###Output
_____no_output_____
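###Markdown
A few more built-in set operations, shown for completeness (illustrative):
###Code
st1.union(st)
st1.intersection(st)
st1.symmetric_difference(st)
###Output
_____no_output_____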
###Markdown
STRINGS
###Code
fruit= "Watermelon"
vegetable="Spinach"
fruit
vegetable
fruit + " " + vegetable
type(fruit)
type(vegetable)
###Output
_____no_output_____ |
results_graphs.ipynb | ###Markdown
This notebook is for Google Colab. It contains the script to build graphs from the experiment results.
###Code
# mount drive
from google.colab import drive
drive.mount('/content/drive')
!pwd
!rm -rf Liubov_Tovbin
%mkdir Liubov_Tovbin
%cd Liubov_Tovbin
!git clone https://github.com/LubaTovbin/CMPE-260
%cd CMPE-260
!pip install git+https://github.com/quantopian/pyfolio
# Source:
# https://github.com/AI4Finance-LLC/Deep-Reinforcement-Learning-for-Automated-Stock-Trading-Ensemble-Strategy-ICAIF-2020
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import pyfolio
import zipfile
with zipfile.ZipFile('original_and_modified_models_results.zip', 'r') as zip_ref:
zip_ref.extractall()
def backtest_strat(df):
strategy_ret= df.copy()
strategy_ret['Date'] = pd.to_datetime(strategy_ret['Date'])
strategy_ret.set_index('Date', drop = False, inplace = True)
strategy_ret.index = strategy_ret.index.tz_localize('UTC')
del strategy_ret['Date']
ts = pd.Series(strategy_ret['daily_return'].values, index=strategy_ret.index)
return ts
# Concatenate all the account_value_trade data files
# 18 data files, 63 records each
# 18*63 = 1134 entries
def get_account_value(path, model_name):
df_account_value=pd.DataFrame()
for i in range(rebalance_window+validation_window, len(unique_trade_date)+1,rebalance_window):
temp = pd.read_csv(path + '/account_value_trade_{}_{}.csv'.format(model_name,i))
df_account_value = df_account_value.append(temp,ignore_index=True)
df_account_value = pd.DataFrame({'account_value':df_account_value['0']})
sharpe=(252**0.5)*df_account_value.account_value.pct_change(1).mean()/df_account_value.account_value.pct_change(1).std()
print('Sharpe ratio for = ',sharpe)
df_account_value=df_account_value.join(df_trade_date[63:].reset_index(drop=True))
return df_account_value
def get_daily_return(df):
df['daily_return']=df.account_value.pct_change(1)
#print('Sharpe: ',(252**0.5)*df['daily_return'].mean()/ df['daily_return'].std())
return df
# read the whole data set
dji = pd.read_csv("data/^DJI.csv")
# Take the data between 01/01/2016 and 06/30/2020 for testing
test_dji=dji[(dji['Date']>='2016-01-01') & (dji['Date']<='2020-06-30')]
test_dji = test_dji.reset_index(drop=True)
test_dji['daily_return']=test_dji['Adj Close'].pct_change(1)
dow_strat = backtest_strat(test_dji)
###Output
_____no_output_____
###Markdown
Ensemble strategy
###Code
df=pd.read_csv('data/dow_30_2009_2020.csv')
rebalance_window = 63
validation_window = 63
unique_trade_date = df[(df.datadate > 20151001)&(df.datadate <= 20200707)].datadate.unique()
# Add the date column
df_trade_date = pd.DataFrame({'datadate':unique_trade_date})
###Output
_____no_output_____
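###Markdown
A quick sanity check (illustrative) of how many trade windows, and hence `account_value_trade` files per model, the loop in `get_account_value` iterates over; the comments above suggest 18.
###Code
len(range(rebalance_window + validation_window, len(unique_trade_date) + 1, rebalance_window))
###Output
_____no_output_____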
###Markdown
Retrieve and concatenate all the account_value_trade files
###Code
%cd original_and_modified_models_results
ensemble_account_value_orig = get_account_value('original/results', "ensemble")
ensemble_account_value_turb_holdbuy = get_account_value('turb_hold_and_buy/results', "ensemble")
ensemble_account_value_turb_hold = get_account_value('turb_hold/results', "ensemble")
ensemble_account_value_short_vec = get_account_value('short_env_vec/results', "ensemble")
ensemble_account_value_lstm = get_account_value('MlpLstm/results', "ensemble")
ensemble_account_value_sortino = get_account_value('Sortino/results', "ensemble")
ensemble_account_value_multi_ppo = get_account_value('Multi-PPO/results', "ensemble")
fig = plt.figure(figsize=(14,7))
ensemble_account_value_orig.account_value.plot(label='original')
ensemble_account_value_short_vec.account_value.plot(label='shorter environment state vector')
ensemble_account_value_lstm.account_value.plot(label='LSTM policy for A2C')
ensemble_account_value_turb_holdbuy.account_value.plot(label='turbulence: hold & buy more')
ensemble_account_value_turb_hold.account_value.plot(label='turbulence: hold')
ensemble_account_value_sortino.account_value.plot(label='Sortino ratio')
ensemble_account_value_multi_ppo.account_value.plot(label='multi-PPO')
plt.ylabel('Account Value Over Time')
plt.legend()
models = [ensemble_account_value_orig,
ensemble_account_value_short_vec,
ensemble_account_value_lstm,
ensemble_account_value_turb_holdbuy,
ensemble_account_value_turb_hold,
ensemble_account_value_sortino,
ensemble_account_value_multi_ppo
]
for model in models:
model = get_daily_return(model)
model['Date'] = test_dji['Date']
ensemble_account_value_orig.info()
ensemble_account_value_orig.head()
ensemble_strat_models = [backtest_strat(model[0:1097]) for model in models]
with pyfolio.plotting.plotting_context(font_scale=1.1):
pyfolio.create_full_tear_sheet(returns = ensemble_strat_models[0],
benchmark_rets=dow_strat,
set_context=False)
###Output
_____no_output_____ |
notebooks/ptyhonVideoForBigDataFiles/V09804_Code/Section 2/Video 2.2.ipynb | ###Markdown
Import pymongo
###Code
import pymongo
from pymongo import MongoClient
###Output
_____no_output_____
###Markdown
Connect to database
###Code
client=MongoClient('localhost')
# Could connect to a DB with any arbitrary URI or port number here
db=client.packt
# This is lazy: packt database doesn't exist yet!
testCollection=db.testCollection
# This is also lazy: testCollection doesn't exist yet!
###Output
_____no_output_____
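###Markdown
An explicit connection URI works the same way (the host and port below are the usual MongoDB defaults; adjust as needed):
###Code
client_uri = MongoClient('mongodb://localhost:27017/')
client_uri.packt
###Output
_____no_output_____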
###Markdown
Insert some documents
###Code
import random,string
letters = list(string.ascii_lowercase)  # string.lowercase is Python 2 only; ascii_lowercase works in both 2 and 3
letters[0:5]
###Output
_____no_output_____
###Markdown
Insert many together
###Code
res=testCollection.insert_many([{random.choice(letters):random.randint(1,10)} for i in range(10)])
###Output
_____no_output_____
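###Markdown
The returned `InsertManyResult` carries the generated `_id` values:
###Code
res.inserted_ids[:3]
###Output
_____no_output_____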
###Markdown
Retrieve Documents with a Cursor
###Code
cur=testCollection.find()
type(cur)
cur.count()
for doc in cur:
    print(doc)  # parentheses needed in Python 3
cur.explain()
###Output
_____no_output_____
###Markdown
Good practice to close cursors when finished
###Code
cur.alive
cur.close()
db.testCollection.drop()
###Output
_____no_output_____ |
docs/notebooks/InformationFlow.ipynb | ###Markdown
Tracking Information Flow. We have explored how one could generate better inputs that can penetrate deeper into the program in question. While doing so, we have relied on program crashes to tell us that we have succeeded in finding problems in the program. However, that is rather simplistic. What if the behavior of the program is simply incorrect, but does not lead to a crash? Can one do better? In this chapter, we explore in depth how to track information flows in Python, and how these flows can be used to determine whether a program behaved as expected. **Prerequisites** * You should have read the [chapter on coverage](Coverage.ipynb). * You should have read the [chapter on probabilistic fuzzing](ProbabilisticGrammarFuzzer.ipynb). We first set up our infrastructure so that we can make use of previously defined functions.
###Code
import bookutils
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.InformationFlow import ```and then make use of the following features.This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string.For tracking information on security properties, use `tstr` as follows:```python>>> thello = tstr('hello', taint='LOW')```Now, any operation from `thello` that results in a string fragment would include the correct taint. For example:```python>>> thello[1:2].taint'LOW'```For tracking the originating indexes from the input string, use `ostr` as follows:```python>>> ohw = ostr("hello\tworld", origin=100)```The originating indexes can be recovered as follows:```python>>> (ohw[0:4] +"-"+ ohw[6:]).origin[100, 101, 102, 103, -1, 106, 107, 108, 109, 110]``` A Vulnerable DatabaseSay we want to implement an *in-memory database* service in Python. Here is a rather flimsy attempt. We use the following dataset.
###Code
INVENTORY = """\
1997,van,Ford,E350
2000,car,Mercury,Cougar
1999,car,Chevy,Venture\
"""
VEHICLES = INVENTORY.split('\n')
###Output
_____no_output_____
###Markdown
Our DB is a Python class that parses its arguments and throws `SQLException` which is defined below.
###Code
class SQLException(Exception):
pass
###Output
_____no_output_____
###Markdown
The database is simply a Python `dict` that is exposed only through SQL queries.
###Code
class DB:
def __init__(self, db={}):
self.db = dict(db)
###Output
_____no_output_____
###Markdown
Representing Tables. The database contains tables, which are created by a method call `create_table()`. Each table data structure is a pair of values. The first one is the meta data containing column names and types. The second value is a list of values in the table.
###Code
class DB(DB):
def create_table(self, table, defs):
self.db[table] = (defs, [])
###Output
_____no_output_____
###Markdown
The table can be retrieved using the name using the `table()` method call.
###Code
class DB(DB):
def table(self, t_name):
if t_name in self.db:
return self.db[t_name]
raise SQLException('Table (%s) was not found' % repr(t_name))
###Output
_____no_output_____
###Markdown
Here is an example of how to use both. We fill a table `inventory` with four columns: `year`, `kind`, `company`, and `model`. Initially, our table is empty.
###Code
def sample_db():
db = DB()
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
return db
###Output
_____no_output_____
###Markdown
Using `table()`, we can retrieve the table definition as well as its contents.
###Code
db = sample_db()
db.table('inventory')
###Output
_____no_output_____
###Markdown
We also define `column()` for retrieving the column definition from a table declaration.
###Code
class DB(DB):
def column(self, table_decl, c_name):
if c_name in table_decl:
return table_decl[c_name]
raise SQLException('Column (%s) was not found' % repr(c_name))
db = sample_db()
decl, rows = db.table('inventory')
db.column(decl, 'year')
###Output
_____no_output_____
###Markdown
Executing SQL Statements. The `sql()` method of `DB` executes SQL statements. It inspects its arguments, and dispatches the query based on the kind of SQL statement to be executed.
###Code
class DB(DB):
def do_select(self, query):
assert False
def do_update(self, query):
assert False
def do_insert(self, query):
assert False
def do_delete(self, query):
assert False
def sql(self, query):
methods = [('select ', self.do_select),
('update ', self.do_update),
('insert into ', self.do_insert),
('delete from', self.do_delete)]
for key, method in methods:
if query.startswith(key):
return method(query[len(key):])
raise SQLException('Unknown SQL (%s)' % query)
###Output
_____no_output_____
###Markdown
At this point, the individual methods for handling SQL statements are not yet defined. Let us do this in the next steps. Selecting DataThe `do_select()` method handles SQL `select` statements to retrieve data from a table.
###Code
class DB(DB):
def do_select(self, query):
FROM, WHERE = ' from ', ' where '
table_start = query.find(FROM)
if table_start < 0:
raise SQLException('no table specified')
where_start = query.find(WHERE)
select = query[:table_start]
if where_start >= 0:
t_name = query[table_start + len(FROM):where_start]
where = query[where_start + len(WHERE):]
else:
t_name = query[table_start + len(FROM):]
where = ''
_, table = self.table(t_name)
if where:
selected = self.expression_clause(table, "(%s)" % where)
selected_rows = [hm for i, data, hm in selected if data]
else:
selected_rows = table
rows = self.expression_clause(selected_rows, "(%s)" % select)
return [data for i, data, hm in rows]
###Output
_____no_output_____
###Markdown
The `expression_clause()` method is used for two purposes: 1. In the form `select` $x$, $y$, $z$ `from` $t$, it _evaluates_ (and returns) the expressions $x$, $y$, $z$ in the contexts of the selected rows. 2. If a clause `where` $p$ is given, it also evaluates $p$ in the context of the rows and includes the rows in the selection only if $p$ holds. To evaluate expressions like $x$, $y$, $z$ or $p$, we make use of the Python evaluation function.
###Code
class DB(DB):
def expression_clause(self, table, statement):
selected = []
for i, hm in enumerate(table):
selected.append((i, self.my_eval(statement, {}, hm), hm))
return selected
###Output
_____no_output_____
###Markdown
Which internally calls `my_eval()` to evaluate any given statement.
###Code
class DB(DB):
def my_eval(self, statement, g, l):
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid WHERE (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
**Note:** Using `eval()` here introduces some important security issues, which we will discuss later in this chapter. Here's how we can use `sql()` to issue a query. Note that the table is yet empty.
###Code
db = sample_db()
db.sql('select year from inventory')
db = sample_db()
db.sql('select year from inventory where year == 2018')
###Output
_____no_output_____
###Markdown
Inserting Data. The `do_insert()` method handles SQL `insert` statements.
###Code
class DB(DB):
def do_insert(self, query):
VALUES = ' values '
table_end = query.find('(')
t_name = query[:table_end].strip()
names_end = query.find(')')
decls, table = self.table(t_name)
names = [i.strip() for i in query[table_end + 1:names_end].split(',')]
# verify columns exist
for k in names:
self.column(decls, k)
values_start = query.find(VALUES)
if values_start < 0:
raise SQLException('Invalid INSERT (%s)' % repr(query))
values = [
i.strip() for i in query[values_start + len(VALUES) + 1:-1].split(',')
]
if len(names) != len(values):
raise SQLException(
'names(%s) != values(%s)' % (repr(names), repr(values)))
        # dict lookups happen in C code and would bypass the string wrappers we add later, so we avoid them here
kvs = {}
for k,v in zip(names, values):
for key,kval in decls.items():
if k == key:
kvs[key] = self.convert(kval, v)
table.append(kvs)
###Output
_____no_output_____
###Markdown
In SQL, a column can come in any supported data type. To ensure it is stored using the type originally declared, we need the ability to convert the values to specific types which is provided by `convert()`.
###Code
import ast
class DB(DB):
def convert(self, cast, value):
try:
return cast(ast.literal_eval(value))
except:
raise SQLException('Invalid Conversion %s(%s)' % (cast, value))
###Output
_____no_output_____
###Markdown
Here is an example of how to use the SQL `insert` command:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.table('inventory')
###Output
_____no_output_____
###Markdown
With the database filled, we can also run more complex queries:
###Code
db.sql('select year + 1, kind from inventory')
db.sql('select year, kind from inventory where year == 1997')
###Output
_____no_output_____
###Markdown
Updating Data. Similarly, `do_update()` handles SQL `update` statements.
###Code
class DB(DB):
def do_update(self, query):
SET, WHERE = ' set ', ' where '
table_end = query.find(SET)
if table_end < 0:
raise SQLException('Invalid UPDATE (%s)' % repr(query))
set_end = table_end + 5
t_name = query[:table_end]
decls, table = self.table(t_name)
names_end = query.find(WHERE)
if names_end >= 0:
names = query[set_end:names_end]
where = query[names_end + len(WHERE):]
else:
names = query[set_end:]
where = ''
sets = [[i.strip() for i in name.split('=')]
for name in names.split(',')]
# verify columns exist
for k, v in sets:
self.column(decls, k)
if where:
selected = self.expression_clause(table, "(%s)" % where)
updated = [hm for i, d, hm in selected if d]
else:
updated = table
for hm in updated:
for k, v in sets:
                # we cannot use direct dict lookups because they are implemented in C and would bypass our wrappers
for key, kval in decls.items():
if key == k:
hm[key] = self.convert(kval, v)
return "%d records were updated" % len(updated)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can update things:
###Code
db.sql('update inventory set year = 1998 where year == 1997')
db.sql('select year from inventory')
db.table('inventory')
###Output
_____no_output_____
###Markdown
Deleting Data. Finally, SQL `delete` statements are handled by `do_delete()`.
###Code
class DB(DB):
def do_delete(self, query):
WHERE = ' where '
table_end = query.find(WHERE)
if table_end < 0:
raise SQLException('Invalid DELETE (%s)' % query)
t_name = query[:table_end].strip()
_, table = self.table(t_name)
where = query[table_end + len(WHERE):]
selected = self.expression_clause(table, "%s" % where)
deleted = [i for i, d, hm in selected if d]
for i in sorted(deleted, reverse=True):
del table[i]
return "%d records were deleted" % len(deleted)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can delete data:
###Code
db.sql('delete from inventory where company == "Ford"')
###Output
_____no_output_____
###Markdown
Our database is now empty:
###Code
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
All Methods Together. Here is how our database can be used.
###Code
db = DB()
###Output
_____no_output_____
###Markdown
Again, we first create a table in our database with the correct data types.
###Code
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
###Output
_____no_output_____
###Markdown
Here is a simple convenience function to update the table using our dataset.
###Code
def update_inventory(sqldb, vehicle):
inventory_def = sqldb.db['inventory'][0]
k, v = zip(*inventory_def.items())
val = [repr(cast(val)) for cast, val in zip(v, vehicle.split(','))]
sqldb.sql('insert into inventory (%s) values (%s)' % (','.join(k),
','.join(val)))
for V in VEHICLES:
update_inventory(db, V)
###Output
_____no_output_____
###Markdown
Our database now contains the same dataset as `VEHICLES` under `INVENTORY` table.
###Code
db.db
###Output
_____no_output_____
###Markdown
Here is a sample select statement.
###Code
db.sql('select year,kind from inventory')
db.sql("select company,model from inventory where kind == 'car'")
###Output
_____no_output_____
###Markdown
We can run updates on it.
###Code
db.sql("update inventory set year = 1998, company = 'Suzuki' where kind == 'van'")
db.db
###Output
_____no_output_____
###Markdown
It can even do mathematics on the fly!
###Code
db.sql('select int(year)+10 from inventory')
###Output
_____no_output_____
###Markdown
Adding a new row to our table.
###Code
db.sql("insert into inventory (year, kind, company, model) values (1, 'charriot', 'Rome', 'Quadriga')")
db.db
###Output
_____no_output_____
###Markdown
Which we then delete.
###Code
db.sql("delete from inventory where year < 1900")
###Output
_____no_output_____
###Markdown
Fuzzing SQL. To verify that everything is OK, let us fuzz. First we define our grammar.
###Code
import string
EXPR_GRAMMAR = {
"<start>": ["<expr>"],
"<expr>": ["<bexpr>", "<aexpr>", "(<expr>)", "<term>"],
"<bexpr>": [
"<aexpr><lt><aexpr>",
"<aexpr><gt><aexpr>",
"<expr>==<expr>",
"<expr>!=<expr>",
],
"<aexpr>": [
"<aexpr>+<aexpr>", "<aexpr>-<aexpr>", "<aexpr>*<aexpr>",
"<aexpr>/<aexpr>", "<word>(<exprs>)", "<expr>"
],
"<exprs>": ["<expr>,<exprs>", "<expr>"],
"<lt>": ["<"],
"<gt>": [">"],
"<term>": ["<number>", "<word>"],
"<number>": ["<integer>.<integer>", "<integer>", "-<number>"],
"<integer>": ["<digit><integer>", "<digit>"],
"<word>": ["<word><letter>", "<word><digit>", "<letter>"],
"<digit>":
list(string.digits),
"<letter>":
list(string.ascii_letters + '_:.')
}
INVENTORY_GRAMMAR = dict(
EXPR_GRAMMAR, **{
'<start>': ['<query>'],
'<query>': [
'select <exprs> from <table>',
'select <exprs> from <table> where <bexpr>',
'insert into <table> (<names>) values (<literals>)',
'update <table> set <assignments> where <bexpr>',
'delete from <table> where <bexpr>',
],
'<table>': ['<word>'],
'<names>': ['<column>,<names>', '<column>'],
'<column>': ['<word>'],
'<literals>': ['<literal>', '<literal>,<literals>'],
'<literal>': ['<number>', "'<chars>'"],
'<assignments>': ['<kvp>,<assignments>', '<kvp>'],
'<kvp>': ['<column>=<value>'],
'<value>': ['<word>'],
'<chars>': ['<char>', '<char><chars>'],
'<char>':
[i for i in string.printable if i not in "<>'\"\t\n\r\x0b\x0c\x00"
] + ['<lt>', '<gt>'],
})
###Output
_____no_output_____
###Markdown
As can be seen from the source of our database, the functions always check whether the table name is correct. Hence, we modify the grammar to choose our particular table so that it will have a better chance of reaching deeper. We will see in the later sections how this can be done automatically.
###Code
INVENTORY_GRAMMAR_F = dict(INVENTORY_GRAMMAR, **{'<table>': ['inventory']})
from GrammarFuzzer import GrammarFuzzer
import traceback  # needed for traceback.print_exc() in the loop below
gf = GrammarFuzzer(INVENTORY_GRAMMAR_F)
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = db.sql(query)
print(repr(res))
except SQLException as e:
print("> ", e)
pass
except:
traceback.print_exc()
break
print()
###Output
'select O6fo,-977091.1,-36.46 from inventory'
> Invalid WHERE ('(O6fo,-977091.1,-36.46)')
'select g3 from inventory where -3.0!=V/g/b+Q*M*G'
> Invalid WHERE ('(-3.0!=V/g/b+Q*M*G)')
'update inventory set z=a,x=F_,Q=K where p(M)<_*S'
> Column ('z') was not found
'update inventory set R=L5pk where e*l*y-u>K+U(:)'
> Column ('R') was not found
'select _/d*Q+H/d(k)<t+M-A+P from inventory'
> Invalid WHERE ('(_/d*Q+H/d(k)<t+M-A+P)')
'select F5 from inventory'
> Invalid WHERE ('(F5)')
'update inventory set jWh.=a6 where wcY(M)>IB7(i)'
> Column ('jWh.') was not found
'update inventory set U=y where L(W<c,(U!=W))<V(((q)==m<F),O,l)'
> Column ('U') was not found
'delete from inventory where M/b-O*h*E<H-W>e(Y)-P'
> Invalid WHERE ('M/b-O*h*E<H-W>e(Y)-P')
'select ((kP(86)+b*S+J/Z/U+i(U))) from inventory'
> Invalid WHERE ('(((kP(86)+b*S+J/Z/U+i(U))))')
###Markdown
Fuzzing does not seem to have triggered any crashes. However, are crashes the only errors that we should be worried about? The Evil of Eval. In our implementation, we have made use of `eval()` to evaluate expressions using the Python interpreter. This allows us to unleash the full power of Python expressions within our SQL statements.
###Code
db.sql('select year from inventory where year < 2000')
###Output
_____no_output_____
###Markdown
In the above query, the clause `year < 2000` is evaluated using `expression_clause()` using Python in the context of each row; hence, `year < 2000` evaluates to either `True` or `False`. The same holds for the expressions being `select`ed:
###Code
db.sql('select year - 1900 if year < 2000 else year - 2000 from inventory')
###Output
_____no_output_____
###Markdown
This works because `year - 1900 if year < 2000 else year - 2000` is a valid Python expression. (It is not a valid SQL expression, though.) The problem with the above is that there is _no limitation_ to what the Python expression can do. What if the user tries the following?
###Code
db.sql('select __import__("os").popen("pwd").read() from inventory')
###Output
_____no_output_____
###Markdown
The above statement effectively reads from the users' file system. Instead of `os.popen("pwd").read()`, it could execute arbitrary Python commands – to access data, install software, run a background process. This is where "the full power of Python expressions" turns back on us. What we want is to allow our _program_ to make full use of its power; yet, the _user_ (or any third party) should not be entrusted to do the same. Hence, we need to differentiate between (trusted) _input from the program_ and (untrusted) _input from the user_. One method that allows such differentiation is that of *dynamic taint analysis*. The idea is to identify the functions that accept user input as *sources* that *taint* any string that comes in through them, and those functions that perform dangerous operations as *sinks*. Finally we bless certain functions as *taint sanitizers*. The idea is that an input from the source should never reach the sink without undergoing sanitization first. This allows us to use a stronger oracle than simply checking for crashes. Tracking String TaintsThere are various levels of taint tracking that one can perform. The simplest is to track that a string fragment originated in a specific environment, and has not undergone a taint removal process. For this, we simply need to wrap the original string with an environment identifier (the _taint_) with `tstr`, and produce `tstr` instances on each operation that results in another string fragment. The attribute `taint` holds a label identifying the environment this instance was derived. A Class for Tainted StringsFor capturing information flows we need a new string class. The idea is to use the new tainted string class `tstr` as a wrapper on the original `str` class. However, `str` is an *immutable* class. Hence, it does not call its `__init__()` method after being constructed. This means that any subclasses of `str` also will not get the `__init__()` method called. If we want to get our initialization routine called, we need to [hook into `__new__()`](https://docs.python.org/3/reference/datamodel.htmlbasic-customization) and return an instance of our own class. We combine this with our initialization code in `__init__()`.
###Code
class tstr(str):
def __new__(cls, value, *args, **kw):
return str.__new__(cls, value)
def __init__(self, value, taint=None, **kwargs):
self.taint = taint
class tstr(tstr):
def __repr__(self):
return tstr(str.__repr__(self), taint=self.taint)
class tstr(tstr):
def __str__(self):
return str.__str__(self)
###Output
_____no_output_____
###Markdown
For example, if we wrap `"hello"` in `tstr`, then we should be able to access its taint:
###Code
thello = tstr('hello', taint='LOW')
thello.taint
repr(thello).taint
###Output
_____no_output_____
###Markdown
By default, when we wrap a string, it is tainted. Hence we also need a way to clear the taint in the string. One way is to simply return a `str` instance as above. However, one may sometimes wish to remove the taint from an existing instance. This is accomplished with `clear_taint()`, which simply sets the taint to `None`. It comes with a companion method `has_taint()`, which checks whether a `tstr` instance is currently tainted.
###Code
class tstr(tstr):
def clear_taint(self):
self.taint = None
return self
def has_taint(self):
return self.taint is not None
###Output
_____no_output_____
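###Markdown
A quick illustration of the two methods on a fresh instance (the variable name is ours):
###Code
tx = tstr('secret', taint='LOW')
tx.has_taint(), tx.clear_taint().has_taint()
###Output
_____no_output_____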
###Markdown
String Operators. To propagate the taint, we have to extend string functions, such as operators. We can do so in one single big step, overloading all string methods and operators. When we create a new string from an existing tainted string, we propagate its taint.
###Code
class tstr(tstr):
def create(self, s):
return tstr(s, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_str_wrapper()` function creates a wrapper around an existing string method which attaches the taint to the result of the method:
###Code
def make_str_wrapper(fun):
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
return self.create(res)
return proxy
###Output
_____no_output_____
###Markdown
We do this for all string methods that return a string:
###Code
def informationflow_init_1():
for name in ['__format__', '__mod__', '__rmod__', '__getitem__', '__add__', '__mul__', '__rmul__',
'capitalize', 'casefold', 'center', 'encode',
'expandtabs', 'format', 'format_map', 'join', 'ljust', 'lower', 'lstrip', 'replace',
'rjust', 'rstrip', 'strip', 'swapcase', 'title', 'translate', 'upper']:
fun = getattr(str, name)
setattr(tstr, name, make_str_wrapper(fun))
informationflow_init_1()
INITIALIZER_LIST = [informationflow_init_1]
def initialize():
for fn in INITIALIZER_LIST:
fn()
###Output
_____no_output_____
###Markdown
The one missing operator is `+` with a regular string on the left side and a tainted string on the right side. Python supports a `__radd__()` method which is invoked if the associated object is used on the right side of an addition.
###Code
class tstr(tstr):
def __radd__(self, s):
return self.create(s + str(self))
###Output
_____no_output_____
###Markdown
With this, we are already done. Let us create a string `thello` with a taint `LOW`.
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
Now, any substring will also be tainted:
###Code
thello[0].taint
thello[1:3].taint
###Output
_____no_output_____
###Markdown
String additions will return a `tstr` object with the taint:
###Code
(tstr('foo', taint='HIGH') + 'bar').taint
###Output
_____no_output_____
###Markdown
Our `__radd__()` method ensures this also works if the `tstr` occurs on the right side of a string addition:
###Code
('foo' + tstr('bar', taint='HIGH')).taint
thello += ', world'
thello.taint
###Output
_____no_output_____
###Markdown
Other operators such as multiplication also work:
###Code
(thello * 5).taint
('hw %s' % thello).taint
(tstr('hello %s', taint='HIGH') % 'world').taint
import string
###Output
_____no_output_____
###Markdown
Tracking Untrusted InputSo, what can one do with tainted strings? We reconsider the `DB` example. We define a "better" `TrustedDB` which only accepts strings tainted as `"TRUSTED"`.
###Code
class TrustedDB(DB):
def sql(self, s):
assert isinstance(s, tstr), "Need a tainted string"
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
return super().sql(s)
###Output
_____no_output_____
###Markdown
Feeding a string with an "unknown" (i.e., non-existing) trust level will cause `TrustedDB` to fail:
###Code
bdb = TrustedDB(db.db)
from ExpectError import ExpectError
with ExpectError():
bdb.sql("select year from INVENTORY")
###Output
Traceback (most recent call last):
File "<ipython-input-1-65a521f9999f>", line 2, in <module>
bdb.sql("select year from INVENTORY")
File "<ipython-input-1-53a654b6cc10>", line 3, in sql
assert isinstance(s, tstr), "Need a tainted string"
AssertionError: Need a tainted string (expected)
###Markdown
Additionally, any user input would originally be tagged with `"UNTRUSTED"` as taint. If we place an untrusted string into our better database, it will also fail:
###Code
bad_user_input = tstr('__import__("os").popen("ls").read()', taint='UNTRUSTED')
with ExpectError():
bdb.sql(bad_user_input)
###Output
Traceback (most recent call last):
File "<ipython-input-1-82c5b2d628ed>", line 3, in <module>
bdb.sql(bad_user_input)
File "<ipython-input-1-53a654b6cc10>", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
Hence, somewhere along the computation, we have to turn the "untrusted" inputs into "trusted" strings. This process is called *sanitization*. A simple sanitization function for our purposes could check that the input matches a restricted pattern, here a plain `select ... from ...` query over a small set of allowed characters (notably excluding quotes); if this is the case, then the input gets a new `"TRUSTED"` taint. If not, we turn the string into an (untrusted) empty string; other alternatives would be to raise an error or to escape or delete "untrusted" characters.
###Code
import re
def sanitize(user_input):
assert isinstance(user_input, tstr)
if re.match(
r'^select +[-a-zA-Z0-9_, ()]+ from +[-a-zA-Z0-9_, ()]+$', user_input):
return tstr(user_input, taint='TRUSTED')
else:
return tstr('', taint='UNTRUSTED')
good_user_input = tstr("select year,model from inventory", taint='UNTRUSTED')
sanitized_input = sanitize(good_user_input)
sanitized_input
sanitized_input.taint
bdb.sql(sanitized_input)
###Output
_____no_output_____
###Markdown
Let us now try out our untrusted input:
###Code
sanitized_input = sanitize(bad_user_input)
sanitized_input
sanitized_input.taint
with ExpectError():
bdb.sql(sanitized_input)
###Output
Traceback (most recent call last):
File "<ipython-input-1-e59f9e5c9d30>", line 2, in <module>
bdb.sql(sanitized_input)
File "<ipython-input-1-53a654b6cc10>", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
In a similar fashion, we can prevent SQL and code injections discussed in [the chapter on Web fuzzing](WebFuzzer.ipynb). Taint Aware FuzzingWe can also use tainting to _direct fuzzing to those grammar rules that are likely to generate dangerous inputs._ The idea here is to identify inputs generated by our fuzzer that lead to untrusted execution. First we define the exception to be thrown when a tainted value reaches a dangerous operation.
###Code
class Tainted(Exception):
def __init__(self, v):
self.v = v
def __str__(self):
return 'Tainted[%s]' % self.v
###Output
_____no_output_____
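###Markdown
Just to see how such an exception renders (a check of our own), we can instantiate it directly:
###Code
# The string representation wraps the offending value
print(Tainted('select secret_data from users'))
###Output
_____no_output_____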
###Markdown
TaintedDBNext, since `my_eval()` is the most dangerous operation in the `DB` class, we define a new class `TaintedDB` that overrides `my_eval()` to throw an exception whenever an untrusted string reaches it.
###Code
class TaintedDB(DB):
def my_eval(self, statement, g, l):
if statement.taint != 'TRUSTED':
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
We initialize an instance of `TaintedDB`
###Code
tdb = TaintedDB()
tdb.db = db.db
###Output
_____no_output_____
###Markdown
Then we start fuzzing.
###Code
import traceback
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = tdb.sql(tstr(query, taint='UNTRUSTED'))
print(repr(res))
except SQLException as e:
pass
except Tainted as e:
print("> ", e)
except:
traceback.print_exc()
break
print()
###Output
'delete from inventory where y/u-l+f/y<Y(c)/A-H*q'
> Tainted[y/u-l+f/y<Y(c)/A-H*q]
"insert into inventory (G,Wmp,sl3hku3) values ('<','?')"
"insert into inventory (d0) values (',_G')"
'select P*Q-w/x from inventory where X<j==:==j*r-f'
> Tainted[(X<j==:==j*r-f)]
'select a>F*i from inventory where Q/I-_+P*j>.'
> Tainted[(Q/I-_+P*j>.)]
'select (V-i<T/g) from inventory where T/r/G<FK(m)/(i)'
> Tainted[(T/r/G<FK(m)/(i))]
'select (((i))),_(S,_)/L-k<H(Sv,R,n,W,Y) from inventory'
> Tainted[((((i))),_(S,_)/L-k<H(Sv,R,n,W,Y))]
'select (N==c*U/P/y),i-e/n*y,T!=w,u from inventory'
> Tainted[((N==c*U/P/y),i-e/n*y,T!=w,u)]
'update inventory set _=B,n=v where o-p*k-J>T'
'select s from inventory where w4g4<.m(_)/_>t'
> Tainted[(w4g4<.m(_)/_>t)]
###Markdown
One can see that `insert`, `update`, `select` and `delete` statements on an existing table lead to taint exceptions. We can now focus on these specific kinds of inputs. However, this is not the only thing we can do. We will see how we can identify specific portions of input that reached tainted execution using character origins in the later sections. But before that, we explore other uses of taints. Preventing Privacy LeaksUsing taints, we can also ensure that secret information does not leak out. We can assign a special taint `"SECRET"` to strings whose information must not leak out:
###Code
secrets = tstr('<Plenty of secret keys>', taint='SECRET')
###Output
_____no_output_____
###Markdown
Accessing any substring of `secrets` will propagate the taint:
###Code
secrets[1:3].taint
###Output
_____no_output_____
###Markdown
Consider the _heartbeat_ security leak from [the chapter on Fuzzing](Fuzzer.ipynb), in which a server would accidentally send back not only the user input sent to it, but also secret memory. If the reply consists only of the user input, there is no taint associated with it:
###Code
user_input = "hello"
reply = user_input
isinstance(reply, tstr)
###Output
_____no_output_____
###Markdown
If, however, the reply contains _any_ part of the secret, the reply will be tainted:
###Code
reply = user_input + secrets[0:5]
reply
reply.taint
###Output
_____no_output_____
###Markdown
The output function of our server would now ensure that the data sent back does not contain any secret information:
###Code
def send_back(s):
assert not isinstance(s, tstr) and not s.taint == 'SECRET'
...
with ExpectError():
send_back(reply)
###Output
Traceback (most recent call last):
File "<ipython-input-1-e02d8e55c3ba>", line 2, in <module>
send_back(reply)
File "<ipython-input-1-a105f7cd1cab>", line 2, in send_back
assert not isinstance(s, tstr) and not s.taint == 'SECRET'
AssertionError (expected)
###Markdown
Tracking Character OriginsOur `tstr` solution can help to identify information leaks – but it is by no means complete. If we actually take the `heartbeat()` implementation from [the chapter on Fuzzing](Fuzzer.ipynb), we will see that _any_ reply is marked as `SECRET` – even those not even accessing secret memory:
###Code
from Fuzzer import heartbeat
reply = heartbeat('hello', 5, memory=secrets)
reply.taint
###Output
_____no_output_____
###Markdown
Why is this? If we look into the implementation of `heartbeat()`, we will see that it first builds a long string `memory` from the (non-secret) reply and the (secret) memory, before returning the first characters from `memory`.```python Store reply in memory memory = reply + memory[len(reply):]```At this point, the whole memory still is tainted as `SECRET`, _including_ the non-secret part from `reply`. We may be able to circumvent the issue by tagging the `reply` as `PUBLIC` – but then, this taint would be in conflict with the `SECRET` tag of `memory`. What happens if we compose a string from two differently tainted strings?
###Code
thilo = tstr("High", taint='HIGH') + tstr("Low", taint='LOW')
###Output
_____no_output_____
###Markdown
It turns out that in this case, the `__add__()` method takes precedence over the `__radd__()` method, which means that the right-hand `"Low"` string is treated as a regular (non-tainted) string.
###Code
thilo
thilo.taint
###Output
_____no_output_____
###Markdown
We could set up the `__add__()` and other methods with special handling for conflicting taints. However, the way this conflict should be resolved would be highly _application-dependent_:* If we use taints to indicate _privacy levels_, `SECRET` privacy should take precedence over `PUBLIC` privacy. Any combination of a `SECRET`-tainted string and a `PUBLIC`-tainted string thus should have a `SECRET` taint.* If we use taints to indicate _origins_ of information, an `UNTRUSTED` origin should take precedence over a `TRUSTED` origin. Any combination of an `UNTRUSTED`-tainted string and a `TRUSTED`-tainted string thus should have an `UNTRUSTED` taint.Of course, such conflict resolutions can be implemented. But even so, they will not help us differentiate secret from non-secret output data in the `heartbeat()` example. Tracking Individual CharactersFortunately, there is a better, more generic way to solve the above problems. The key to composition of differently tainted strings is to assign taints not only to strings, but actually to every bit of information – in our case, characters. If every character has a taint on its own, a new composition of characters will simply inherit this very taint _per character_. To this end, we introduce a second bit of information named _origin_. Distinguishing various untrusted sources can be accomplished by giving each instance its own separate origin (called *colors* in dynamic taint analysis). You will see an instance of this technique in the chapter on [Grammar Mining](GrammarMiner.ipynb). In this section, we track *character-level* origins. That is, given a fragment that resulted from a portion of the original string, one will be able to tell which portion of the input string the fragment was taken from. In essence, each character index of the input gets its own color. More complex origin tracking, such as *bitmap origins*, is possible, where a single character may result from multiple input character indexes (as in *checksum* operations on strings). We do not consider these in this chapter. A Class for Tracking Character OriginsLet us introduce a class `ostr` which, like `tstr`, carries a taint for each string, and additionally an _origin_ for each character that indicates its source. It is a consecutive number in a particular range (by default, starting with zero) indicating its _position_ within a specific origin.
###Code
class ostr(str):
DEFAULT_ORIGIN = 0
def __new__(cls, value, *args, **kw):
return str.__new__(cls, value)
def __init__(self, value, taint=None, origin=None, **kwargs):
self.taint = taint
if origin is None:
origin = ostr.DEFAULT_ORIGIN
if isinstance(origin, int):
self.origin = list(range(origin, origin + len(self)))
else:
self.origin = origin
assert len(self.origin) == len(self)
class ostr(ostr):
def create(self, s):
return ostr(s, taint=self.taint, origin=self.origin)
class ostr(ostr):
UNKNOWN_ORIGIN = -1
def __repr__(self):
# handle escaped chars
origin = [ostr.UNKNOWN_ORIGIN]
for s, o in zip(str(self), self.origin):
origin.extend([o] * (len(repr(s)) - 2))
origin.append(ostr.UNKNOWN_ORIGIN)
return ostr(str.__repr__(self), taint=self.taint, origin=origin)
class ostr(ostr):
def __str__(self):
return str.__str__(self)
###Output
_____no_output_____
###Markdown
By default, character origins start with `0`:
###Code
thello = ostr('hello')
assert thello.origin == [0, 1, 2, 3, 4]
###Output
_____no_output_____
###Markdown
We can also specify the starting origin as below -- `6..10`
###Code
tworld = ostr('world', origin=6)
assert tworld.origin == [6, 7, 8, 9, 10]
a = ostr("hello\tworld")
repr(a).origin
###Output
_____no_output_____
###Markdown
`str()` returns a `str` instance without origin or taint information:
###Code
assert type(str(thello)) == str
###Output
_____no_output_____
###Markdown
`repr()`, however, keeps the origin information for the original string:
###Code
repr(thello)
repr(thello).origin
###Output
_____no_output_____
###Markdown
Just as with taints, we can clear origins and check whether an origin is present:
###Code
class ostr(ostr):
def clear_taint(self):
self.taint = None
return self
def has_taint(self):
return self.taint is not None
class ostr(ostr):
def clear_origin(self):
self.origin = [self.UNKNOWN_ORIGIN] * len(self)
return self
def has_origin(self):
return any(origin != self.UNKNOWN_ORIGIN for origin in self.origin)
thello = ostr('Hello')
assert thello.has_origin()
thello.clear_origin()
assert not thello.has_origin()
###Output
_____no_output_____
###Markdown
In the remainder of this section, we re-implement various string methods such that they also keep track of origins. If this is too tedious for you, jump right [to the next section](#Checking-Origins), which gives a number of usage examples. CreateWe need to create new substrings that are wrapped in `ostr` objects. However, we also want to allow our subclasses to create their own instances. Hence, we again provide a `create()` method that produces a new `ostr` instance.
###Code
class ostr(ostr):
def create(self, res, origin=None):
return ostr(res, taint=self.taint, origin=origin)
thello = ostr('hello', taint='HIGH')
tworld = thello.create('world', origin=6)
tworld.origin
tworld.taint
assert (thello.origin, tworld.origin) == (
[0, 1, 2, 3, 4], [6, 7, 8, 9, 10])
###Output
_____no_output_____
###Markdown
IndexIn Python, indexing is provided through `__getitem__()`. Indexing on positive integers is simple enough. However, it has two additional wrinkles. The first is that, if the index is negative, that many characters are counted from the end of the string, which lies just after the last character. That is, the last character has a negative index of `-1`.
###Code
class ostr(ostr):
def __getitem__(self, key):
res = super().__getitem__(key)
if isinstance(key, int):
key = len(self) + key if key < 0 else key
return self.create(res, [self.origin[key]])
elif isinstance(key, slice):
return self.create(res, self.origin[key])
else:
assert False
hello = ostr('hello', taint='HIGH')
assert (hello[0], hello[-1]) == ('h', 'o')
hello[0].taint
###Output
_____no_output_____
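###Markdown
To illustrate the negative-index handling with a check of our own: the last character of a string whose origins start at 100 keeps origin 104.
###Code
olast = ostr('hello', origin=100)[-1]
olast, olast.origin
###Output
_____no_output_____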
###Markdown
The other wrinkle is that `__getitem__()` can accept a slice. We discuss this next. SlicesThe Python `slice` operator `[n:m]` relies on the object being an `iterator`. Hence, we define the `__iter__()` method, which returns a custom `iterator`.
###Code
class ostr(ostr):
def __iter__(self):
return ostr_iterator(self)
###Output
_____no_output_____
###Markdown
The `__iter__()` method requires a supporting `iterator` object. The `iterator` is used to save the state of the current iteration, which it does by keeping a reference to the original `ostr`, and the current index of iteration `_str_idx`.
###Code
class ostr_iterator():
def __init__(self, ostr):
self._ostr = ostr
self._str_idx = 0
def __next__(self):
if self._str_idx == len(self._ostr):
raise StopIteration
# calls ostr getitem should be ostr
c = self._ostr[self._str_idx]
assert isinstance(c, ostr)
self._str_idx += 1
return c
###Output
_____no_output_____
###Markdown
Bringing all these together:
###Code
thw = ostr('hello world', taint='HIGH')
thw[0:5]
assert thw[0:5].has_taint()
assert thw[0:5].has_origin()
thw[0:5].taint
thw[0:5].origin
###Output
_____no_output_____
###Markdown
Splits
###Code
def make_split_wrapper(fun):
def proxy(self, *args, **kwargs):
lst = fun(self, *args, **kwargs)
return [self.create(elem) for elem in lst]
return proxy
for name in ['split', 'rsplit', 'splitlines']:
fun = getattr(str, name)
setattr(ostr, name, make_split_wrapper(fun))
thello = ostr('hello world', taint='LOW')
thello == 'hello world'
thello.split()[0].taint
###Output
_____no_output_____
###Markdown
(Exercise for the reader: handle _partitions_, i.e., splitting a string by substrings) ConcatenationIf two origined strings are concatenated together, it may be desirable to transfer the origins from each to the corresponding portion of the resulting string. The concatenation of strings is accomplished by overriding `__add__()`.
###Code
class ostr(ostr):
def __add__(self, other):
if isinstance(other, ostr):
return self.create(str.__add__(self, other),
(self.origin + other.origin))
else:
return self.create(str.__add__(self, other),
(self.origin + [self.UNKNOWN_ORIGIN for i in other]))
###Output
_____no_output_____
###Markdown
Testing concatenations between two `ostr` instances:
###Code
thello = ostr("hello")
tworld = ostr("world", origin=6)
thw = thello + tworld
assert thw.origin == [0, 1, 2, 3, 4, 6, 7, 8, 9, 10]
###Output
_____no_output_____
###Markdown
What if a `ostr` is concatenated with a `str`?
###Code
space = " "
th_w = thello + space + tworld
assert th_w.origin == [
0,
1,
2,
3,
4,
ostr.UNKNOWN_ORIGIN,
ostr.UNKNOWN_ORIGIN,
6,
7,
8,
9,
10]
###Output
_____no_output_____
###Markdown
One wrinkle here is that when adding an `ostr` and a `str`, the user may place the `str` first, in which case the `__add__()` method will be called on the `str` instance, not on the `ostr` instance. However, Python provides a solution: if one defines `__radd__()` on the `ostr` instance, that method will be called rather than `str.__add__()`.
###Code
class ostr(ostr):
def __radd__(self, other):
origin = other.origin if isinstance(other, ostr) else [
self.UNKNOWN_ORIGIN for i in other]
return self.create(str.__add__(other, self), (origin + self.origin))
###Output
_____no_output_____
###Markdown
We test it out:
###Code
shello = "hello"
tworld = ostr("world")
thw = shello + tworld
assert thw.origin == [ostr.UNKNOWN_ORIGIN] * len(shello) + [0, 1, 2, 3, 4]
###Output
_____no_output_____
###Markdown
These two methods, slicing and concatenation, are sufficient to implement other string methods that result in a string and do not change the characters themselves (i.e., no case change). Hence, we look at a helper method next. Extract Origin StringGiven a specific input index, the method `x()` extracts the portion of an `ostr` that originated at that index. As a convenience, it also accepts a `slice` of indexes along with plain `int`s.
###Code
class ostr(ostr):
class TaintException(Exception):
pass
def x(self, i=0):
if not self.origin:
            raise self.TaintException('Invalid request idx')
if isinstance(i, int):
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j == i]]
elif isinstance(i, slice):
r = range(i.start or 0, i.stop or len(self), i.step or 1)
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j in r]]
thw = ostr('hello world', origin=100)
assert thw.x(101) == ['e']
assert thw.x(slice(101, 105)) == ['e', 'l', 'l', 'o']
###Output
_____no_output_____
###Markdown
Replace The `replace()` method replaces a portion of the string with another.
###Code
class ostr(ostr):
def replace(self, a, b, n=None):
old_origin = self.origin
b_origin = b.origin if isinstance(
b, ostr) else [self.UNKNOWN_ORIGIN] * len(b)
mystr = str(self)
i = 0
while True:
if n and i >= n:
break
idx = mystr.find(a)
if idx == -1:
break
last = idx + len(a)
mystr = mystr.replace(a, b, 1)
partA, partB = old_origin[0:idx], old_origin[last:]
old_origin = partA + b_origin + partB
i += 1
return self.create(mystr, old_origin)
my_str = ostr("aa cde aa")
res = my_str.replace('aa', 'bb')
assert (res, res.origin) == ('bb cde bb',
                             [ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN,
                              2, 3, 4, 5, 6,
                              ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN])
my_str = ostr("aa cde aa")
res = my_str.replace('aa', ostr('bb', origin=100))
assert (
res, res.origin) == (
('bb cde bb'), [
100, 101, 2, 3, 4, 5, 6, 100, 101])
###Output
_____no_output_____
###Markdown
Split We essentially have to re-implement split operations, and split by space is slightly different from other splits.
###Code
class ostr(ostr):
def _split_helper(self, sep, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = len(sep)
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
first_idx = last_idx + sep_len
return result_list
def _split_space(self, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = 0
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
v = str(self[last_idx:])
sep_len = len(v) - len(v.lstrip(' '))
first_idx = last_idx + sep_len
return result_list
def rsplit(self, sep=None, maxsplit=-1):
splitted = super().rsplit(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
def split(self, sep=None, maxsplit=-1):
splitted = super().split(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
my_str = ostr('ab cdef ghij kl')
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([0, 1], [3, 4, 5, 6], [8, 9, 10, 11], [13, 14])
my_str = ostr('ab cdef ghij kl', origin=list(range(0, 15)))
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert(ab.origin, cdef.origin, kl.origin) == ([0, 1], [3, 4, 5, 6], [13, 14])
my_str = ostr('ab   cdef ghij    kl', origin=100, taint='HIGH')
ab, cdef, ghij, kl = my_str.rsplit()
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([100, 101], [105, 106, 107, 108], [110, 111, 112, 113],
[118, 119])
my_str = ostr('ab   cdef ghij    kl', origin=list(range(0, 20)), taint='HIGH')
ab, cdef, ghij, kl = my_str.split()
assert (ab.origin, cdef.origin, kl.origin) == ([0, 1], [5, 6, 7, 8], [18, 19])
assert ab.taint == 'HIGH'
###Output
_____no_output_____
###Markdown
Strip
###Code
class ostr(ostr):
def strip(self, cl=None):
return self.lstrip(cl).rstrip(cl)
def lstrip(self, cl=None):
res = super().lstrip(cl)
i = self.find(res)
return self[i:]
def rstrip(self, cl=None):
res = super().rstrip(cl)
return self[0:len(res)]
my_str1 = ostr("  abc  ")
v = my_str1.strip()
assert (v, v.origin) == ('abc', [2, 3, 4])
my_str1 = ostr("  abc  ")
v = my_str1.lstrip()
assert (v, v.origin) == ('abc  ', [2, 3, 4, 5, 6])
my_str1 = ostr("  abc  ")
v = my_str1.rstrip()
assert (v, v.origin) == ('  abc', [0, 1, 2, 3, 4])
###Output
_____no_output_____
###Markdown
Expand Tabs
###Code
class ostr(ostr):
def expandtabs(self, n=8):
parts = self.split('\t')
res = super().expandtabs(n)
all_parts = []
for i, p in enumerate(parts):
all_parts.extend(p.origin)
if i < len(parts) - 1:
l = len(all_parts) % n
all_parts.extend([p.origin[-1]] * l)
return self.create(res, all_parts)
my_str = str("ab\tcd")
my_ostr = ostr("ab\tcd")
v1 = my_str.expandtabs(4)
v2 = my_ostr.expandtabs(4)
assert str(v1) == str(v2)
assert (len(v1), repr(v2), v2.origin) == (6, "'ab  cd'", [0, 1, 1, 1, 3, 4])
class ostr(ostr):
def join(self, iterable):
mystr = ''
myorigin = []
sep_origin = self.origin
lst = list(iterable)
for i, s in enumerate(lst):
sorigin = s.origin if isinstance(s, ostr) else [
self.UNKNOWN_ORIGIN] * len(s)
myorigin.extend(sorigin)
mystr += str(s)
if i < len(lst) - 1:
myorigin.extend(sep_origin)
mystr += str(self)
res = super().join(iterable)
assert len(res) == len(mystr)
return self.create(res, myorigin)
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104])
v4 = ostr('').join([v2, v3, v1])
assert (
v4, v4.origin) == (
'cdefab', [
103, 104, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 100, 101])
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104])
v4 = ostr(',').join([v2, v3, v1])
assert (v4, v4.origin) == ('cd,ef,ab',
[103, 104, 0, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 0, 100, 101])
###Output
_____no_output_____
###Markdown
Partitions
###Code
class ostr(ostr):
def partition(self, sep):
partA, sep, partB = super().partition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
def rpartition(self, sep):
partA, sep, partB = super().rpartition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
###Output
_____no_output_____
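###Markdown
The partition methods come without a usage example above, so here is a quick check of our own (using a value from the inventory dataset):
###Code
record = ostr('1997,van,Ford', origin=0)
year, sep, rest = record.partition(',')
year.origin, sep.origin, rest.origin
###Output
_____no_output_____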
###Markdown
Justify
###Code
class ostr(ostr):
    def ljust(self, width, fillchar=' '):
        res = super().ljust(width, fillchar)
        initial = len(res) - len(self)
        if isinstance(fillchar, ostr):
            t = fillchar.origin[0]
        else:
            t = self.UNKNOWN_ORIGIN
        # ljust() pads on the right, so the padding origins are appended
        return self.create(res, self.origin + [t] * initial)
class ostr(ostr):
    def rjust(self, width, fillchar=' '):
        res = super().rjust(width, fillchar)
        final = len(res) - len(self)
        if isinstance(fillchar, ostr):
            t = fillchar.origin[0]
        else:
            t = self.UNKNOWN_ORIGIN
        # rjust() pads on the left, so the padding origins are prepended
        return self.create(res, [t] * final + self.origin)
###Output
_____no_output_____
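###Markdown
A brief check of our own, assuming the padding-origin handling above: fill characters get `UNKNOWN_ORIGIN`, while the original characters keep their origins.
###Code
onum = ostr('42', origin=7)
onum.rjust(5).origin, onum.ljust(5).origin
###Output
_____no_output_____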
###Markdown
Mod
###Code
class ostr(ostr):
def __mod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
s_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = self.find('%s')
assert i >= 0
res = super().__mod__(s)
r_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
class ostr(ostr):
def __rmod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
r_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = s.find('%s')
assert i >= 0
res = super().__rmod__(s)
s_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
a = ostr('hello %s world', origin=100)
a
(a % 'good').origin
b = 'hello %s world'
c = ostr('bad', origin=10)
(b % c).origin
###Output
_____no_output_____
###Markdown
String methods that do not change origin
###Code
class ostr(ostr):
def swapcase(self):
return self.create(str(self).swapcase(), self.origin)
def upper(self):
return self.create(str(self).upper(), self.origin)
def lower(self):
return self.create(str(self).lower(), self.origin)
def capitalize(self):
return self.create(str(self).capitalize(), self.origin)
def title(self):
return self.create(str(self).title(), self.origin)
a = ostr('aa', origin=100).upper()
a, a.origin
###Output
_____no_output_____
###Markdown
General wrappers These are not strictly needed for operation, but can be useful for tracing.
###Code
def make_str_wrapper(fun):
def proxy(*args, **kwargs):
res = fun(*args, **kwargs)
return res
return proxy
import inspect
import types
def informationflow_init_2():
ostr_members = [name for name, fn in inspect.getmembers(ostr, callable)
if isinstance(fn, types.FunctionType) and fn.__qualname__.startswith('ostr')]
for name, fn in inspect.getmembers(str, callable):
if name not in set(['__class__', '__new__', '__str__', '__init__',
'__repr__', '__getattribute__']) | set(ostr_members):
setattr(ostr, name, make_str_wrapper(fn))
informationflow_init_2()
INITIALIZER_LIST.append(informationflow_init_2)
###Output
_____no_output_____
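###Markdown
A quick check of our own that the generic wrappers leave the underlying behavior unchanged; methods that do not return strings simply pass their results through:
###Code
ohello = ostr('hello', origin=100)
ohello.find('lo'), ohello.startswith('he')
###Output
_____no_output_____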
###Markdown
Methods yet to be translated These methods generate strings from other strings. However, we do not have the right implementations for any of these. Hence these are marked as dangerous until we can generate the right translations.
###Code
def make_str_abort_wrapper(fun):
def proxy(*args, **kwargs):
raise ostr.TaintException(
'%s Not implemented in `ostr`' %
fun.__name__)
return proxy
def informationflow_init_3():
for name, fn in inspect.getmembers(str, callable):
# Omitted 'splitlines' as this is needed for formatting output in
# IPython/Jupyter
if name in ['__format__', 'format_map', 'format',
'__mul__', '__rmul__', 'center', 'zfill', 'decode', 'encode']:
setattr(ostr, name, make_str_abort_wrapper(fn))
informationflow_init_3()
INITIALIZER_LIST.append(informationflow_init_3)
###Output
_____no_output_____
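###Markdown
For instance (a check of our own), calling one of these not-yet-translated methods now aborts with a `TaintException` instead of silently losing origins:
###Code
from ExpectError import ExpectError  # already imported above; repeated for clarity

with ExpectError():
    ostr('hello', origin=100).center(10)
###Output
_____no_output_____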
###Markdown
While generating proxy wrappers for string operations can handle most common cases of transmission of information flow, some of the operations involving strings cannot be overridden; we will see examples in the section on the limits of taint tracking below. Checking OriginsWith all this implemented, we now have full-fledged `ostr` strings where we can easily check the origin of each and every character. To check whether a string originates from another string, we can convert the origin to a set and resort to standard set operations:
###Code
s = ostr("hello", origin=100)
s[1]
s[1].origin
set(s[1].origin) <= set(s.origin)
t = ostr("world", origin=200)
set(s.origin) <= set(t.origin)
u = s + t + "!"
u.origin
ostr.UNKNOWN_ORIGIN in u.origin
###Output
_____no_output_____
###Markdown
Privacy Leaks RevisitedLet us apply character-level origins to see whether we can come up with a satisfactory solution for checking the `heartbeat()` function against information leakage.
###Code
SECRET_ORIGIN = 1000
###Output
_____no_output_____
###Markdown
We define a "secret" that must not leak out:
###Code
secret = ostr('<again, some super-secret input>', origin=SECRET_ORIGIN)
###Output
_____no_output_____
###Markdown
Each and every character in `secret` has an origin starting with `SECRET_ORIGIN`:
###Code
print(secret.origin)
###Output
[1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031]
###Markdown
If we now invoke `heartbeat()` with a given string, the origins of the reply should all be `UNKNOWN_ORIGIN` (as the reply comes from the input), and none of the characters should have an origin of `SECRET_ORIGIN` or higher.
###Code
s = heartbeat('hello', 5, memory=secret)
s
print(s.origin)
###Output
[-1, -1, -1, -1, -1]
###Markdown
We can verify that the secret did not leak out by formulating appropriate assertions:
###Code
assert s.origin == [ostr.UNKNOWN_ORIGIN] * len(s)
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in s.origin)
assert not any(origin >= SECRET_ORIGIN for origin in s.origin)
###Output
_____no_output_____
###Markdown
All assertions pass, again confirming that no secret leaked out. Let us now go and exploit `heartbeat()` to reveal its secrets. As `heartbeat()` is unchanged, it is as vulnerable as it was:
###Code
s = heartbeat('hello', 32, memory=secret)
s
###Output
_____no_output_____
###Markdown
Now, however, the reply _does_ contain secret information:
###Code
print(s.origin)
with ExpectError():
assert s.origin == [ostr.UNKNOWN_ORIGIN] * len(s)
with ExpectError():
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in s.origin)
with ExpectError():
assert not any(origin >= SECRET_ORIGIN for origin in s.origin)
###Output
Traceback (most recent call last):
File "<ipython-input-1-9630f3080c59>", line 2, in <module>
assert not any(origin >= SECRET_ORIGIN for origin in s.origin)
AssertionError (expected)
###Markdown
We can now integrate these assertions into the `heartbeat()` function, causing it to fail before leaking information. Additionally (or alternatively), we can also rewrite our output functions not to give out any secret information. We will leave these two exercises for the reader. Taint-Directed FuzzingThe previous _Taint Aware Fuzzing_ was a bit unsatisfactory in that we could not focus on the specific parts of the grammar that led to dangerous operations. We fix that with _taint-directed fuzzing_ using `TrackingDB`. The idea here is to track the origins of each character that reaches `eval`, trace them back to the grammar nodes that generated them, and increase the probability of using those nodes again. TrackingDBThe `TrackingDB` is similar to `TaintedDB`. The difference is that if a string carrying an origin reaches `my_eval()`, we simply raise a `Tainted` exception.
###Code
class TrackingDB(TaintedDB):
def my_eval(self, statement, g, l):
if statement.origin:
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
Next, we need a specially crafted fuzzer that preserves the taints. TaintedGrammarFuzzerWe define a `TaintedGrammarFuzzer` class that ensures that the taints propagate to the derivation tree. This is similar to the `GrammarFuzzer` from the [chapter on grammar fuzzers](GrammarFuzzer.ipynb) except that the origins and taints are preserved.
###Code
import random
from Grammars import START_SYMBOL
from GrammarFuzzer import GrammarFuzzer
from Parser import canonical
class TaintedGrammarFuzzer(GrammarFuzzer):
def __init__(self,
grammar,
start_symbol=START_SYMBOL,
expansion_switch=1,
log=False):
self.tainted_start_symbol = ostr(
start_symbol, origin=[1] * len(start_symbol))
self.expansion_switch = expansion_switch
self.log = log
self.grammar = grammar
self.c_grammar = canonical(grammar)
self.init_tainted_grammar()
def expansion_cost(self, expansion, seen=set()):
symbols = [e for e in expansion if e in self.c_grammar]
if len(symbols) == 0:
return 1
if any(s in seen for s in symbols):
return float('inf')
return sum(self.symbol_cost(s, seen) for s in symbols) + 1
def fuzz_tree(self):
tree = (self.tainted_start_symbol, [])
nt_leaves = [tree]
expansion_trials = 0
while nt_leaves:
idx = random.randint(0, len(nt_leaves) - 1)
key, children = nt_leaves[idx]
expansions = self.ct_grammar[key]
if expansion_trials < self.expansion_switch:
expansion = random.choice(expansions)
else:
costs = [self.expansion_cost(e) for e in expansions]
m = min(costs)
all_min = [i for i, c in enumerate(costs) if c == m]
expansion = expansions[random.choice(all_min)]
new_leaves = [(token, []) for token in expansion]
new_nt_leaves = [e for e in new_leaves if e[0] in self.ct_grammar]
children[:] = new_leaves
nt_leaves[idx:idx + 1] = new_nt_leaves
if self.log:
print("%-40s" % (key + " -> " + str(expansion)))
expansion_trials += 1
return tree
def fuzz(self):
self.derivation_tree = self.fuzz_tree()
return self.tree_to_string(self.derivation_tree)
###Output
_____no_output_____
###Markdown
We use a specially prepared tainted grammar for fuzzing. We mark each individual definition, each individual rule, and each individual token with a separate origin (we chose a token boundary of 10 here, after inspecting the grammar). This allows us to track exactly which parts of the grammar were involved in the operations we are interested in.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def init_tainted_grammar(self):
key_increment, alt_increment, token_increment = 1000, 100, 10
key_origin = key_increment
self.ct_grammar = {}
for key, val in self.c_grammar.items():
key_origin += key_increment
os = []
for v in val:
ts = []
key_origin += alt_increment
for t in v:
nt = ostr(t, origin=key_origin)
key_origin += token_increment
ts.append(nt)
os.append(ts)
self.ct_grammar[key] = os
# a use tracking grammar
self.ctp_grammar = {}
for key, val in self.ct_grammar.items():
self.ctp_grammar[key] = [(v, dict(use=0)) for v in val]
###Output
_____no_output_____
###Markdown
As before, we initialize the `TrackingDB`
###Code
trdb = TrackingDB(db.db)
###Output
_____no_output_____
###Markdown
Finally, we need to ensure that taints and origins are preserved when the tree is converted back to a string. For this, we override `tree_to_string()`:
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def tree_to_string(self, tree):
symbol, children, *_ = tree
e = ostr('')
if children:
return e.join([self.tree_to_string(c) for c in children])
else:
return e if symbol in self.c_grammar else symbol
###Output
_____no_output_____
###Markdown
We define `update_grammar()`, which takes the set of origins that reached the dangerous operations, together with the derivation tree of the original string used for fuzzing, and uses them to update the enhanced grammar.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def update_grammar(self, origin, dtree):
def update_tree(dtree, origin):
key, children = dtree
if children:
updated_children = [update_tree(c, origin) for c in children]
corigin = set.union(
*[o for (key, children, o) in updated_children])
corigin = corigin.union(set(key.origin))
return (key, children, corigin)
else:
my_origin = set(key.origin).intersection(origin)
return (key, [], my_origin)
key, children, oset = update_tree(dtree, set(origin))
for key, alts in self.ctp_grammar.items():
for alt, o in alts:
alt_origins = set([i for token in alt for i in token.origin])
if alt_origins.intersection(oset):
o['use'] += 1
###Output
_____no_output_____
###Markdown
With these, we are now ready to fuzz.
###Code
def tree_type(tree):
key, children = tree
return (type(key), key, [tree_type(c) for c in children])
tgf = TaintedGrammarFuzzer(INVENTORY_GRAMMAR_F)
x = None
for _ in range(10):
qtree = tgf.fuzz_tree()
query = tgf.tree_to_string(qtree)
assert isinstance(query, ostr)
try:
print(repr(query))
res = trdb.sql(query)
print(repr(res))
except SQLException as e:
print(e)
except Tainted as e:
print(e)
origin = e.args[0].origin
tgf.update_grammar(origin, qtree)
except:
traceback.print_exc()
break
print()
###Output
'select (g!=(9)!=((:)==2==9)!=J)==-7 from inventory'
Tainted[((g!=(9)!=((:)==2==9)!=J)==-7)]
'delete from inventory where ((c)==T)!=5==(8!=Y)!=-5'
Tainted[((c)==T)!=5==(8!=Y)!=-5]
'select (((w==(((X!=------8)))))) from inventory'
Tainted[((((w==(((X!=------8)))))))]
'delete from inventory where ((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))'
Tainted[((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))]
'delete from inventory where ((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))'
Tainted[((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))]
'delete from inventory where o!=2==---5==3!=t'
Tainted[o!=2==---5==3!=t]
'select (2) from inventory'
Tainted[((2))]
'select _ from inventory'
Tainted[(_)]
'select L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)) from inventory'
Tainted[(L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)))]
'delete from inventory where _==(7==(9)!=(---5)==1)==-8'
Tainted[_==(7==(9)!=(---5)==1)==-8]
###Markdown
We can now inspect our enhanced grammar to see how many times each rule was used.
###Code
tgf.ctp_grammar
###Output
_____no_output_____
###Markdown
From here, the idea is to focus on the rules that reached dangerous operations more often, and to increase the probability of producing values of that kind. The Limits of Taint TrackingWhile our framework can detect information leakage, it is by no means perfect. There are several ways in which taints can get lost and information thus may still leak out. ConversionsWe only track taints and origins through _strings_ and _characters_. If we convert these to numbers (or other data), the information is lost. As an example, consider this function, converting individual characters to numbers and back:
###Code
def strip_all_info(s):
t = ""
for c in s:
t += chr(ord(c))
return t
thello = ostr("Secret")
thello
thello.origin
###Output
_____no_output_____
###Markdown
The taints and origins will not propagate through the number conversion:
###Code
thello_stripped = strip_all_info(thello)
thello_stripped
with ExpectError():
thello_stripped.origin
###Output
Traceback (most recent call last):
File "<ipython-input-1-56d5157cf575>", line 2, in <module>
thello_stripped.origin
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
This issue could be addressed by extending numbers with taints and origins, just as we did for strings. At some point, however, this will still break down, because as soon as an internal C function in the Python library is reached, the taint will not propagate into and across the C function. (Unless one starts implementing dynamic taints for these, that is.) Internal C libraries As we mentioned before, calls to _internal_ C libraries do not propagate taints. For example, while the following preserves the taints,
###Code
hello = ostr('hello', origin=100)
world = ostr('world', origin=200)
(hello + ' ' + world).origin
###Output
_____no_output_____
###Markdown
a call to a `join` that should be equivalent will fail.
###Code
with ExpectError():
''.join([hello, ' ', world]).origin
###Output
Traceback (most recent call last):
File "<ipython-input-1-ad148b54cc0b>", line 2, in <module>
''.join([hello, ' ', world]).origin
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
Implicit Information FlowEven if one could taint all data in a program, there still would be means to break information flow – notably by turning explicit flow into _implicit_ flow, or data flow into _control flow_. Here is an example:
###Code
def strip_all_info_again(s):
t = ""
for c in s:
if c == 'a':
t += 'a'
elif c == 'b':
t += 'b'
elif c == 'c':
t += 'c'
...
###Output
_____no_output_____
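###Markdown
To make this concrete, here is a self-contained variant of our own (with an explicit `return`, covering only the characters 'a' to 'c'): the rebuilt string equals the input, yet carries no origin information.
###Code
def copy_abc_via_control_flow(s):  # our own helper, not part of the chapter's API
    """Rebuild s character by character, using control flow only."""
    t = ""
    for c in s:
        if c == 'a':
            t += 'a'
        elif c == 'b':
            t += 'b'
        elif c == 'c':
            t += 'c'
    return t

copied = copy_abc_via_control_flow(ostr('abc', origin=100))
copied == 'abc', hasattr(copied, 'origin')
###Output
_____no_output_____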
###Markdown
With such a function, there is no explicit data flow between the characters in `s` and the characters in `t`; yet, the strings would be identical. This problem frequently occurs in programs that process and manipulate external input. Enforcing TaintingBoth conversions and implicit information flow are ways in which taint and origin information can get lost. To address the problem, the best solution is to _always assume the worst from untainted strings_:* When it comes to trust, an untainted string should be treated as _possibly untrusted_, and hence not relied upon unless sanitized.* When it comes to privacy, an untainted string should be treated as _possibly secret_, and hence not leaked out.As a consequence, your program should always have two kinds of taints: one for explicitly trusted (or secret) and one for explicitly untrusted (or non-secret). If a taint gets lost along the way, you may have to restore it from its sources – not unlike the string methods discussed above. The benefit is a trusted application, in which each and every information flow can be checked at runtime, with violations quickly discovered through automated tests. Synopsis This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string. For tracking information on security properties, use `tstr` as follows:
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
Now, any operation from `thello` that results in a string fragment would include the correct taint. For example:
###Code
thello[1:2].taint
###Output
_____no_output_____
###Markdown
For tracking the originating indexes from the input string, use `ostr` as follows:
###Code
ohw = ostr("hello\tworld", origin=100)
###Output
_____no_output_____
###Markdown
The originating indexes can be recovered as follows:
###Code
(ohw[0:4] +"-"+ ohw[6:]).origin
###Output
_____no_output_____
###Markdown
Lessons Learned* String-based and character-based taints allow us to dynamically track the information flow from input to the internals of a system and back to the output.* Checking taints allows us to discover untrusted inputs and information leakage at runtime.* Data conversions and implicit data flow may strip taint information; the resulting untainted strings should be treated as having the worst possible taint.* Taints can be used in conjunction with fuzzing to provide a more robust indication of incorrect behavior than simply relying on program crashes. Next StepsAn even better alternative to our taint-directed fuzzing is to make use of _symbolic_ techniques that take the semantics of the program under test into account. The chapter on [flow fuzzing](FlowFuzzer.ipynb) introduces these symbolic techniques for the purpose of exploring information flows; the subsequent chapter on [symbolic fuzzing](SymbolicFuzzer.ipynb) then shows how to make full-fledged use of symbolic execution for covering code. Similarly, [search based fuzzing](SearchBasedFuzzer.ipynb) can often provide a cheaper exploration strategy. BackgroundTaint analysis on Python using a library approach as we implemented in this chapter was discussed by Conti et al. \cite{Conti2010}. Exercises Exercise 1: Tainted NumbersIntroduce a class `tint` (for tainted integer) that, like `tstr`, has a taint attribute that gets passed on from `tint` to `tint`. Part 1: CreationImplement the `tint` class such that taints are set:```pythonx = tint(42, taint='SECRET')assert x.taint == 'SECRET'``` **Solution.** This is pretty straightforward, as we can apply the same scheme as for `tstr`:
###Code
class tint(int):
def __new__(cls, value, *args, **kw):
return int.__new__(cls, value)
def __init__(self, value, taint=None, **kwargs):
self.taint = taint
x = tint(42, taint='SECRET')
assert x.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 2: Arithmetic expressionsEnsure that taints get passed along arithmetic expressions; support addition, subtraction, multiplication, and division operators.```pythony = x + 1assert y.taint == 'SECRET'``` **Solution.** As with `tstr`, we implement a `create()` method and a convenience function to quickly define all arithmetic operations:
###Code
class tint(tint):
def create(self, n):
# print("New tint from", n)
return tint(n, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_int_wrapper()` function creates a wrapper around an existing `int` method which attaches the taint to the result of the method:
###Code
def make_int_wrapper(fun):
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
# print(fun, args, kwargs, "=", repr(res))
return self.create(res)
return proxy
###Output
_____no_output_____
###Markdown
We do this for all arithmetic operators:
###Code
for name in ['__add__', '__radd__', '__mul__', '__rmul__', '__sub__',
'__floordiv__', '__truediv__']:
fun = getattr(int, name)
setattr(tint, name, make_int_wrapper(fun))
x = tint(42, taint='SECRET')
y = x + 1
y.taint
###Output
_____no_output_____
###Markdown
Part 3: Passing taints from integers to stringsConverting a tainted integer into a string (using `repr()`) should yield a tainted string:```pythons = repr(x)assert s.taint == 'SECRET'``` **Solution.** We define the string conversion functions such that they return a tainted string (`tstr`):
###Code
class tint(tint):
def __repr__(self):
s = int.__repr__(self)
return tstr(s, taint=self.taint)
class tint(tint):
def __str__(self):
return tstr(int.__str__(self), taint=self.taint)
x = tint(42, taint='SECRET')
s = repr(x)
assert s.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 4: Passing taints from strings to integersConverting a tainted object (with a `taint` attribute) to an integer should pass that taint:```pythonpassword = tstr('1234', taint='NOT_EXACTLY_SECRET')x = tint(password)assert x == 1234assert x.taint == 'NOT_EXACTLY_SECRET'``` **Solution.** This can be done by having the `__init__()` constructor check for a `taint` attribute:
###Code
class tint(tint):
def __init__(self, value, taint=None, **kwargs):
if taint is not None:
self.taint = taint
else:
self.taint = getattr(value, 'taint', None)
password = tstr('1234', taint='NOT_EXACTLY_SECRET')
x = tint(password)
assert x == 1234
assert x.taint == 'NOT_EXACTLY_SECRET'
###Output
_____no_output_____
###Markdown
Tracking Information FlowWe have explored how one could generate better inputs that can penetrate deeper into the program in question. While doing so, we have relied on program crashes to tell us that we have succeeded in finding problems in the program. However, that is rather simplistic. What if the behavior of the program is simply incorrect, but does not lead to a crash? Can one do better?In this chapter, we explore in depth how to track information flows in Python, and how these flows can be used to determine whether a program behaved as expected.
###Code
from bookutils import YouTubeVideo
YouTubeVideo('MJ0VGzVbhYc')
###Output
_____no_output_____
###Markdown
**Prerequisites*** You should have read the [chapter on coverage](Coverage.ipynb).* You should have read the [chapter on probabilistic fuzzing](ProbabilisticGrammarFuzzer.ipynb). We first set up our infrastructure so that we can make use of previously defined functions.
###Code
import bookutils
from typing import List, Any, Optional, Union
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.InformationFlow import ```and then make use of the following features.This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string. Tracking String Taints`tstr` objects are replacements for Python strings that allow one to track and check _taints_ – that is, information on where a string originated. For instance, one can mark strings that originate from third-party input with a taint of "LOW", meaning that they have a low security level. The taint is passed in the constructor of a `tstr` object:```python>>> thello = tstr('hello', taint='LOW')```A `tstr` object is fully compatible with original Python strings. For instance, we can index it and access substrings:```python>>> thello[:4]'hell'```However, the `tstr` object also stores the taint, which can be accessed using the `taint` attribute:```python>>> thello.taint'LOW'```The neat thing about taints is that they propagate to all strings derived from the original tainted string. Indeed, any operation from a `tstr` string that results in a string fragment produces another `tstr` object that includes the original taint. For example:```python>>> thello[1:2].taint'LOW'````tstr` objects duplicate most `str` methods, as indicated in the class diagram: Tracking Character Origins`ostr` objects extend `tstr` objects by not only tracking a taint, but also the originating _indexes_ from the input string. This allows you to exactly track where individual characters came from. Assume you have a long string, which at index 100 contains the password `"joshua1234"`. Then you can save this origin information using an `ostr` as follows:```python>>> secret = ostr("joshua1234", origin=100, taint='SECRET')```The `origin` attribute of an `ostr` provides access to a list of indexes:```python>>> secret.origin[100, 101, 102, 103, 104, 105, 106, 107, 108, 109]>>> secret.taint'SECRET'````ostr` objects are compatible with Python strings, except that string operations return `ostr` objects (together with the saved origin and index information). An index of `-1` indicates that the corresponding character has no origin as supplied to the `ostr()` constructor:```python>>> secret_substr = (secret[0:4] + "-" + secret[6:])>>> secret_substr.taint'SECRET'>>> secret_substr.origin[100, 101, 102, 103, -1, 106, 107, 108, 109]````ostr` objects duplicate most `str` methods, as indicated in the class diagram: A Vulnerable DatabaseSay we want to implement an *in-memory database* service in Python. Here is a rather flimsy attempt. We use the following dataset.
###Code
INVENTORY = """\
1997,van,Ford,E350
2000,car,Mercury,Cougar
1999,car,Chevy,Venture\
"""
VEHICLES = INVENTORY.split('\n')
###Output
_____no_output_____
###Markdown
Our DB is a Python class that parses its arguments and throws `SQLException` which is defined below.
###Code
class SQLException(Exception):
pass
###Output
_____no_output_____
###Markdown
The database is simply a Python `dict` that is exposed only through SQL queries.
###Code
class DB:
def __init__(self, db={}):
self.db = dict(db)
###Output
_____no_output_____
###Markdown
Representing TablesThe database contains tables, which are created by a call to the `create_table()` method. Each table data structure is a pair of values. The first one is the metadata containing column names and types. The second value is the list of rows in the table.
###Code
class DB(DB):
def create_table(self, table, defs):
self.db[table] = (defs, [])
###Output
_____no_output_____
###Markdown
A table can be retrieved by name using the `table()` method.
###Code
class DB(DB):
def table(self, t_name):
if t_name in self.db:
return self.db[t_name]
raise SQLException('Table (%s) was not found' % repr(t_name))
###Output
_____no_output_____
###Markdown
Here is an example of how to use both. We create a table `inventory` with four columns: `year`, `kind`, `company`, and `model`. Initially, our table is empty.
###Code
def sample_db():
db = DB()
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
return db
###Output
_____no_output_____
###Markdown
Using `table()`, we can retrieve the table definition as well as its contents.
###Code
db = sample_db()
db.table('inventory')
###Output
_____no_output_____
###Markdown
We also define `column()` for retrieving the column definition from a table declaration.
###Code
class DB(DB):
def column(self, table_decl, c_name):
if c_name in table_decl:
return table_decl[c_name]
raise SQLException('Column (%s) was not found' % repr(c_name))
db = sample_db()
decl, rows = db.table('inventory')
db.column(decl, 'year')
###Output
_____no_output_____
###Markdown
Executing SQL StatementsThe `sql()` method of `DB` executes SQL statements. It inspects its arguments, and dispatches the query based on the kind of SQL statement to be executed.
###Code
class DB(DB):
def do_select(self, query):
...
def do_update(self, query):
...
def do_insert(self, query):
...
def do_delete(self, query):
...
def sql(self, query):
methods = [('select ', self.do_select),
('update ', self.do_update),
('insert into ', self.do_insert),
('delete from', self.do_delete)]
for key, method in methods:
if query.startswith(key):
return method(query[len(key):])
raise SQLException('Unknown SQL (%s)' % query)
###Output
_____no_output_____
###Markdown
Here's an example of how to use the `DB` class:
###Code
some_db = DB()
some_db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
However, at this point, the individual methods for handling SQL statements are not yet defined. Let us do this in the next steps. Excursion: Implementing SQL Statements Selecting DataThe `do_select()` method handles SQL `select` statements to retrieve data from a table.
###Code
class DB(DB):
def do_select(self, query):
FROM, WHERE = ' from ', ' where '
table_start = query.find(FROM)
if table_start < 0:
raise SQLException('no table specified')
where_start = query.find(WHERE)
select = query[:table_start]
if where_start >= 0:
t_name = query[table_start + len(FROM):where_start]
where = query[where_start + len(WHERE):]
else:
t_name = query[table_start + len(FROM):]
where = ''
_, table = self.table(t_name)
if where:
selected = self.expression_clause(table, "(%s)" % where)
selected_rows = [hm for i, data, hm in selected if data]
else:
selected_rows = table
rows = self.expression_clause(selected_rows, "(%s)" % select)
return [data for i, data, hm in rows]
###Output
_____no_output_____
###Markdown
The `expression_clause()` method is used for two purposes:

1. In the form `select` $x$, $y$, $z$ `from` $t$, it _evaluates_ (and returns) the expressions $x$, $y$, $z$ in the contexts of the selected rows.
2. If a clause `where` $p$ is given, it also evaluates $p$ in the context of the rows and includes the rows in the selection only if $p$ holds.

To evaluate expressions like $x$, $y$, $z$ or $p$, the method `expression_clause()` makes use of the Python `eval()` evaluation function.
###Code
class DB(DB):
def expression_clause(self, table, statement):
selected = []
for i, hm in enumerate(table):
selected.append((i, self.my_eval(statement, {}, hm), hm))
return selected
###Output
_____no_output_____
###Markdown
If `eval()` fails for whatever reason, we raise an exception:
###Code
class DB(DB):
def my_eval(self, statement, g, l):
try:
return eval(statement, g, l)
except Exception:
raise SQLException('Invalid WHERE (%s)' % repr(statement))
###Output
_____no_output_____
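###Markdown
As a quick illustration (our own example, not part of the original flow), we can call `expression_clause()` directly on a list of rows; it returns one `(index, result, row)` triple per row:
```python
rows = [{'year': 1997, 'kind': 'van'}, {'year': 2000, 'kind': 'car'}]
DB().expression_clause(rows, "(year < 2000)")
# -> [(0, True, {'year': 1997, ...}), (1, False, {'year': 2000, ...})]
```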
###Markdown
**Note:** Using `eval()` here introduces some important security issues, which we will discuss later in this chapter. Here's how we can use `sql()` to issue a query. Note that the table is still empty, so no rows are returned.
###Code
db = sample_db()
db.sql('select year from inventory')
db = sample_db()
db.sql('select year from inventory where year == 2018')
###Output
_____no_output_____
###Markdown
Inserting Data

The `do_insert()` method handles SQL `insert` statements.
###Code
class DB(DB):
def do_insert(self, query):
VALUES = ' values '
table_end = query.find('(')
t_name = query[:table_end].strip()
names_end = query.find(')')
decls, table = self.table(t_name)
names = [i.strip() for i in query[table_end + 1:names_end].split(',')]
# verify columns exist
for k in names:
self.column(decls, k)
values_start = query.find(VALUES)
if values_start < 0:
raise SQLException('Invalid INSERT (%s)' % repr(query))
values = [
i.strip() for i in query[values_start + len(VALUES) + 1:-1].split(',')
]
if len(names) != len(values):
raise SQLException(
'names(%s) != values(%s)' % (repr(names), repr(values)))
# dict lookups happen in C code, so we can't use that
kvs = {}
for k,v in zip(names, values):
for key,kval in decls.items():
if k == key:
kvs[key] = self.convert(kval, v)
table.append(kvs)
###Output
_____no_output_____
###Markdown
In SQL, a column can hold values of any supported data type. To ensure a value is stored using the type declared for its column, we need to convert it accordingly; this conversion is provided by `convert()`.
###Code
import ast
class DB(DB):
def convert(self, cast, value):
try:
return cast(ast.literal_eval(value))
except:
raise SQLException('Invalid Conversion %s(%s)' % (cast, value))
###Output
_____no_output_____
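###Markdown
As a quick check (an illustrative example of ours), `convert()` uses `ast.literal_eval()`, so string values must be quoted, just as in the SQL statements below:
```python
db = DB()
db.convert(int, "1997")    # -> 1997
db.convert(str, "'van'")   # -> 'van'  (an unquoted "van" would raise an SQLException)
```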
###Markdown
Here is an example of how to use the SQL `insert` command:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.table('inventory')
###Output
_____no_output_____
###Markdown
With the database filled, we can also run more complex queries:
###Code
db.sql('select year + 1, kind from inventory')
db.sql('select year, kind from inventory where year == 1997')
###Output
_____no_output_____
###Markdown
Updating Data

Similarly, `do_update()` handles SQL `update` statements.
###Code
class DB(DB):
def do_update(self, query):
SET, WHERE = ' set ', ' where '
table_end = query.find(SET)
if table_end < 0:
raise SQLException('Invalid UPDATE (%s)' % repr(query))
set_end = table_end + 5
t_name = query[:table_end]
decls, table = self.table(t_name)
names_end = query.find(WHERE)
if names_end >= 0:
names = query[set_end:names_end]
where = query[names_end + len(WHERE):]
else:
names = query[set_end:]
where = ''
sets = [[i.strip() for i in name.split('=')]
for name in names.split(',')]
# verify columns exist
for k, v in sets:
self.column(decls, k)
if where:
selected = self.expression_clause(table, "(%s)" % where)
updated = [hm for i, d, hm in selected if d]
else:
updated = table
for hm in updated:
for k, v in sets:
# we can not do dict lookups because it is implemented in C.
for key, kval in decls.items():
if key == k:
hm[key] = self.convert(kval, v)
return "%d records were updated" % len(updated)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can update things:
###Code
db.sql('update inventory set year = 1998 where year == 1997')
db.sql('select year from inventory')
db.table('inventory')
###Output
_____no_output_____
###Markdown
Deleting Data

Finally, SQL `delete` statements are handled by `do_delete()`.
###Code
class DB(DB):
def do_delete(self, query):
WHERE = ' where '
table_end = query.find(WHERE)
if table_end < 0:
raise SQLException('Invalid DELETE (%s)' % query)
t_name = query[:table_end].strip()
_, table = self.table(t_name)
where = query[table_end + len(WHERE):]
selected = self.expression_clause(table, "%s" % where)
deleted = [i for i, d, hm in selected if d]
for i in sorted(deleted, reverse=True):
del table[i]
return "%d records were deleted" % len(deleted)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can delete data:
###Code
db.sql('delete from inventory where company == "Ford"')
###Output
_____no_output_____
###Markdown
Our database is now empty:
###Code
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
End of Excursion

Here is how our database can be used.
###Code
db = DB()
###Output
_____no_output_____
###Markdown
We first create a table in our database with the correct data types.
###Code
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
###Output
_____no_output_____
###Markdown
Here is a simple convenience function to update the table using our dataset.
###Code
def update_inventory(sqldb, vehicle):
inventory_def = sqldb.db['inventory'][0]
k, v = zip(*inventory_def.items())
val = [repr(cast(val)) for cast, val in zip(v, vehicle.split(','))]
sqldb.sql('insert into inventory (%s) values (%s)' % (','.join(k),
','.join(val)))
for V in VEHICLES:
update_inventory(db, V)
###Output
_____no_output_____
###Markdown
Our database now contains the same data as `VEHICLES`, stored in the `inventory` table.
###Code
db.db
###Output
_____no_output_____
###Markdown
Here are some sample select statements.
###Code
db.sql('select year,kind from inventory')
db.sql("select company,model from inventory where kind == 'car'")
###Output
_____no_output_____
###Markdown
We can run updates on it.
###Code
db.sql("update inventory set year = 1998, company = 'Suzuki' where kind == 'van'")
db.db
###Output
_____no_output_____
###Markdown
It can even do mathematics on the fly!
###Code
db.sql('select int(year)+10 from inventory')
###Output
_____no_output_____
###Markdown
Adding a new row to our table.
###Code
db.sql("insert into inventory (year, kind, company, model) values (1, 'charriot', 'Rome', 'Quadriga')")
db.db
###Output
_____no_output_____
###Markdown
Which we then delete.
###Code
db.sql("delete from inventory where year < 1900")
###Output
_____no_output_____
###Markdown
Fuzzing SQL

To verify that everything is OK, let us fuzz. First we define our grammar.

Excursion: Defining a SQL grammar
###Code
import string
from Grammars import START_SYMBOL, Grammar, Expansion, \
is_valid_grammar, extend_grammar
EXPR_GRAMMAR: Grammar = {
"<start>": ["<expr>"],
"<expr>": ["<bexpr>", "<aexpr>", "(<expr>)", "<term>"],
"<bexpr>": [
"<aexpr><lt><aexpr>",
"<aexpr><gt><aexpr>",
"<expr>==<expr>",
"<expr>!=<expr>",
],
"<aexpr>": [
"<aexpr>+<aexpr>", "<aexpr>-<aexpr>", "<aexpr>*<aexpr>",
"<aexpr>/<aexpr>", "<word>(<exprs>)", "<expr>"
],
"<exprs>": ["<expr>,<exprs>", "<expr>"],
"<lt>": ["<"],
"<gt>": [">"],
"<term>": ["<number>", "<word>"],
"<number>": ["<integer>.<integer>", "<integer>", "-<number>"],
"<integer>": ["<digit><integer>", "<digit>"],
"<word>": ["<word><letter>", "<word><digit>", "<letter>"],
"<digit>":
list(string.digits),
"<letter>":
list(string.ascii_letters + '_:.')
}
assert is_valid_grammar(EXPR_GRAMMAR)
PRINTABLE_CHARS: List[str] = [i for i in string.printable
if i not in "<>'\"\t\n\r\x0b\x0c\x00"] + ['<lt>', '<gt>']
INVENTORY_GRAMMAR = extend_grammar(EXPR_GRAMMAR,
{
'<start>': ['<query>'],
'<query>': [
'select <exprs> from <table>',
'select <exprs> from <table> where <bexpr>',
'insert into <table> (<names>) values (<literals>)',
'update <table> set <assignments> where <bexpr>',
'delete from <table> where <bexpr>',
],
'<table>': ['<word>'],
'<names>': ['<column>,<names>', '<column>'],
'<column>': ['<word>'],
'<literals>': ['<literal>', '<literal>,<literals>'],
'<literal>': ['<number>', "'<chars>'"],
'<assignments>': ['<kvp>,<assignments>', '<kvp>'],
'<kvp>': ['<column>=<value>'],
'<value>': ['<word>'],
'<chars>': ['<char>', '<char><chars>'],
'<char>': PRINTABLE_CHARS, # type: ignore
})
assert is_valid_grammar(INVENTORY_GRAMMAR)
###Output
_____no_output_____
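###Markdown
To get a feeling for the expressions this grammar produces, here is a small illustrative sample generated with the `GrammarFuzzer` from the [chapter on grammar fuzzers](GrammarFuzzer.ipynb):
```python
from GrammarFuzzer import GrammarFuzzer

expr_fuzzer = GrammarFuzzer(EXPR_GRAMMAR)
[expr_fuzzer.fuzz() for _ in range(3)]  # e.g. small arithmetic and comparison expressions
```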
###Markdown
As can be seen from the source of our database, the functions always check whether the table name is valid. Hence, we restrict the grammar to our particular table so that queries have a better chance of reaching deeper into the code. We will see in later sections how this can be done automatically.
###Code
INVENTORY_GRAMMAR_F = extend_grammar(INVENTORY_GRAMMAR,
{'<table>': ['inventory']})
###Output
_____no_output_____
###Markdown
End of Excursion
###Code
from GrammarFuzzer import GrammarFuzzer
import traceback  # used by the exception handler in the loop below
gf = GrammarFuzzer(INVENTORY_GRAMMAR_F)
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = db.sql(query)
print(repr(res))
except SQLException as e:
print("> ", e)
pass
except:
traceback.print_exc()
break
print()
###Output
'select O6fo,-977091.1,-36.46 from inventory'
> Invalid WHERE ('(O6fo,-977091.1,-36.46)')
'select g3 from inventory where -3.0!=V/g/b+Q*M*G'
> Invalid WHERE ('(-3.0!=V/g/b+Q*M*G)')
'update inventory set z=a,x=F_,Q=K where p(M)<_*S'
> Column ('z') was not found
'update inventory set R=L5pk where e*l*y-u>K+U(:)'
> Column ('R') was not found
'select _/d*Q+H/d(k)<t+M-A+P from inventory'
> Invalid WHERE ('(_/d*Q+H/d(k)<t+M-A+P)')
'select F5 from inventory'
> Invalid WHERE ('(F5)')
'update inventory set jWh.=a6 where wcY(M)>IB7(i)'
> Column ('jWh.') was not found
'update inventory set U=y where L(W<c,(U!=W))<V(((q)==m<F),O,l)'
> Column ('U') was not found
'delete from inventory where M/b-O*h*E<H-W>e(Y)-P'
> Invalid WHERE ('M/b-O*h*E<H-W>e(Y)-P')
'select ((kP(86)+b*S+J/Z/U+i(U))) from inventory'
> Invalid WHERE ('(((kP(86)+b*S+J/Z/U+i(U))))')
###Markdown
Fuzzing does not seem to have triggered any crashes. However, are crashes the only errors that we should be worried about?

The Evil of Eval

In our database implementation – notably in the `expression_clause()` method – we have made use of `eval()` to evaluate expressions using the Python interpreter. This allows us to unleash the full power of Python expressions within our SQL statements.
###Code
db.sql('select year from inventory where year < 2000')
###Output
_____no_output_____
###Markdown
In the above query, the clause `year < 2000` is evaluated by `expression_clause()` using the Python interpreter in the context of each row; hence, `year < 2000` evaluates to either `True` or `False`. The same holds for the expressions being `select`ed:
###Code
db.sql('select year - 1900 if year < 2000 else year - 2000 from inventory')
###Output
_____no_output_____
###Markdown
This works because `year - 1900 if year < 2000 else year - 2000` is a valid Python expression. (It is not a valid SQL expression, though.) The problem with the above is that there is _no limitation_ to what the Python expression can do. What if the user tries the following?
###Code
db.sql('select __import__("os").popen("pwd").read() from inventory')
###Output
_____no_output_____
###Markdown
The above statement effectively reads from the user's file system. Instead of `os.popen("pwd").read()`, it could execute arbitrary Python commands – to access data, install software, run a background process. This is where "the full power of Python expressions" turns back on us. What we want is to allow our _program_ to make full use of its power; yet, the _user_ (or any third party) should not be entrusted to do the same. Hence, we need to differentiate between (trusted) _input from the program_ and (untrusted) _input from the user_. One method that allows such differentiation is that of *dynamic taint analysis*. The idea is to identify the functions that accept user input as *sources* that *taint* any string that comes in through them, and the functions that perform dangerous operations as *sinks*. Finally, we bless certain functions as *taint sanitizers*. The idea is that input from a source should never reach a sink without undergoing sanitization first. This allows us to use a stronger oracle than simply checking for crashes.

Tracking String Taints

There are various levels of taint tracking that one can perform. The simplest is to track that a string fragment originated in a specific environment and has not undergone a taint removal process. For this, we simply wrap the original string, together with an environment identifier (the _taint_), in a `tstr` instance, and produce `tstr` instances on each operation that results in another string fragment. The attribute `taint` holds a label identifying the environment this instance was derived from.

A Class for Tainted Strings

For capturing information flows we need a new string class. The idea is to use the new tainted string class `tstr` as a wrapper on the original `str` class. However, `str` is an *immutable* class. Hence, it does not call its `__init__()` method after being constructed. This means that any subclasses of `str` also will not get the `__init__()` method called. If we want to get our initialization routine called, we need to [hook into `__new__()`](https://docs.python.org/3/reference/datamodel.html#basic-customization) and return an instance of our own class. We combine this with our initialization code in `__init__()`.
###Code
class tstr(str):
"""Wrapper for strings, saving taint information"""
def __new__(cls, value, *args, **kw):
"""Create a tstr() instance. Used internally."""
return str.__new__(cls, value)
def __init__(self, value: Any, taint: Any = None, **kwargs) -> None:
"""Constructor.
`value` is the string value the `tstr` object is to be constructed from.
`taint` is an (optional) taint to be propagated to derived strings."""
self.taint: Any = taint
class tstr(tstr):
def __repr__(self) -> tstr:
"""Return a representation."""
return tstr(str.__repr__(self), taint=self.taint)
class tstr(tstr):
def __str__(self) -> str:
"""Convert to string"""
return str.__str__(self)
###Output
_____no_output_____
###Markdown
For example, if we wrap `"hello"` in `tstr`, then we should be able to access its taint:
###Code
thello: tstr = tstr('hello', taint='LOW')
thello.taint
repr(thello).taint # type: ignore
###Output
_____no_output_____
###Markdown
By default, when we wrap a string, it is tainted. Hence we also need a way to clear the taint in the string. One way is to simply return a `str` instance as above. However, one may sometimes wish to remove the taint from an existing instance. This is accomplished with `clear_taint()`, which simply sets the taint to `None`. It comes with a companion method `has_taint()`, which checks whether a `tstr` instance is currently tainted.
###Code
class tstr(tstr):
def clear_taint(self):
"""Remove taint"""
self.taint = None
return self
def has_taint(self):
"""Check if taint is present"""
return self.taint is not None
###Output
_____no_output_____
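###Markdown
As a quick illustration of these two methods:
```python
thello = tstr('hello', taint='LOW')
assert thello.has_taint()      # freshly wrapped strings carry their taint
thello.clear_taint()
assert not thello.has_taint()  # after clearing, no taint is left
```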
###Markdown
String Operators

To propagate the taint, we have to extend string functions, such as operators. We can do so in one single big step, overloading all string methods and operators. When we create a new string from an existing tainted string, we propagate its taint.
###Code
class tstr(tstr):
def create(self, s):
return tstr(s, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_str_wrapper()` function creates a wrapper around an existing string method which attaches the taint to the result of the method:
###Code
class tstr(tstr):
@staticmethod
def make_str_wrapper(fun):
"""Make `fun` (a `str` method) a method in `tstr`"""
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
return self.create(res)
if hasattr(fun, '__doc__'):
# Copy docstring
proxy.__doc__ = fun.__doc__
return proxy
###Output
_____no_output_____
###Markdown
We do this for all string methods that return a string:
###Code
def informationflow_init_1():
for name in ['__format__', '__mod__', '__rmod__', '__getitem__',
'__add__', '__mul__', '__rmul__',
'capitalize', 'casefold', 'center', 'encode',
'expandtabs', 'format', 'format_map', 'join',
'ljust', 'lower', 'lstrip', 'replace',
'rjust', 'rstrip', 'strip', 'swapcase', 'title', 'translate', 'upper']:
fun = getattr(str, name)
setattr(tstr, name, tstr.make_str_wrapper(fun))
informationflow_init_1()
INITIALIZER_LIST = [informationflow_init_1]
def initialize():
for fn in INITIALIZER_LIST:
fn()
###Output
_____no_output_____
###Markdown
The one missing operator is `+` with a regular string on the left side and a tainted string on the right side. Python supports a `__radd__()` method which is invoked if the associated object is used on the right side of an addition.
###Code
class tstr(tstr):
def __radd__(self, value):
"""Return value + self, as a `tstr` object"""
return self.create(value + str(self))
###Output
_____no_output_____
###Markdown
With this, we are already done. Let us create a string `thello` with a taint `LOW`.
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
Now, any substring will also be tainted:
###Code
thello[0].taint # type: ignore
thello[1:3].taint # type: ignore
###Output
_____no_output_____
###Markdown
String additions will return a `tstr` object with the taint:
###Code
(tstr('foo', taint='HIGH') + 'bar').taint # type: ignore
###Output
_____no_output_____
###Markdown
Our `__radd__()` method ensures this also works if the `tstr` occurs on the right side of a string addition:
###Code
('foo' + tstr('bar', taint='HIGH')).taint # type: ignore
thello += ', world' # type: ignore
thello.taint # type: ignore
###Output
_____no_output_____
###Markdown
Other operators such as multiplication also work:
###Code
(thello * 5).taint # type: ignore
('hw %s' % thello).taint # type: ignore
(tstr('hello %s', taint='HIGH') % 'world').taint # type: ignore
###Output
_____no_output_____
###Markdown
Tracking Untrusted Input

So, what can one do with tainted strings? We reconsider the `DB` example. We define a "better" `TrustedDB` which only accepts strings tainted as `"TRUSTED"`.
###Code
class TrustedDB(DB):
def sql(self, s):
assert isinstance(s, tstr), "Need a tainted string"
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
return super().sql(s)
###Output
_____no_output_____
###Markdown
Feeding a string with an "unknown" (i.e., non-existing) trust level will cause `TrustedDB` to fail:
###Code
bdb = TrustedDB(db.db)
from ExpectError import ExpectError
with ExpectError():
bdb.sql("select year from INVENTORY")
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_13168/3935989889.py", line 2, in <module>
bdb.sql("select year from INVENTORY")
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_13168/995123203.py", line 3, in sql
assert isinstance(s, tstr), "Need a tainted string"
AssertionError: Need a tainted string (expected)
###Markdown
Additionally, any user input would originally be tagged with `"UNTRUSTED"` as taint. If we place an untrusted string into our better database, it will also fail:
###Code
bad_user_input = tstr('__import__("os").popen("ls").read()', taint='UNTRUSTED')
with ExpectError():
bdb.sql(bad_user_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_13168/3307042773.py", line 3, in <module>
bdb.sql(bad_user_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_13168/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
Hence, somewhere along the computation, we have to turn the "untrusted" inputs into "trusted" strings. This process is called *sanitization*. A simple sanitization function for our purposes could ensure that the input matches a simple `select` ... `from` ... pattern over a restricted character set (excluding, notably, quotes and semicolons); if this is the case, then the input gets a new `"TRUSTED"` taint. If not, we turn the string into an (untrusted) empty string; other alternatives would be to raise an error or to escape or delete "untrusted" characters.
###Code
import re
def sanitize(user_input):
assert isinstance(user_input, tstr)
if re.match(
r'^select +[-a-zA-Z0-9_, ()]+ from +[-a-zA-Z0-9_, ()]+$', user_input):
return tstr(user_input, taint='TRUSTED')
else:
return tstr('', taint='UNTRUSTED')
good_user_input = tstr("select year,model from inventory", taint='UNTRUSTED')
sanitized_input = sanitize(good_user_input)
sanitized_input
sanitized_input.taint
bdb.sql(sanitized_input)
###Output
_____no_output_____
###Markdown
Let us now try out our untrusted input:
###Code
sanitized_input = sanitize(bad_user_input)
sanitized_input
sanitized_input.taint
with ExpectError():
bdb.sql(sanitized_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_13168/249000876.py", line 2, in <module>
bdb.sql(sanitized_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_13168/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
In a similar fashion, we can prevent SQL and code injections discussed in [the chapter on Web fuzzing](WebFuzzer.ipynb).

Taint Aware Fuzzing

We can also use tainting to _direct fuzzing to those grammar rules that are likely to generate dangerous inputs._ The idea here is to identify inputs generated by our fuzzer that lead to untrusted execution. First we define the exception to be thrown when a tainted value reaches a dangerous operation.
###Code
class Tainted(Exception):
def __init__(self, v):
self.v = v
def __str__(self):
return 'Tainted[%s]' % self.v
###Output
_____no_output_____
###Markdown
TaintedDB

Next, since `my_eval()` is the most dangerous operation in the `DB` class, we define a new class `TaintedDB` that overrides `my_eval()` to throw an exception whenever an untrusted string reaches it.
###Code
class TaintedDB(DB):
def my_eval(self, statement, g, l):
if statement.taint != 'TRUSTED':
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
We initialize an instance of `TaintedDB`, letting it reuse the contents of our earlier database.
###Code
tdb = TaintedDB()
tdb.db = db.db
###Output
_____no_output_____
###Markdown
Then we start fuzzing.
###Code
import traceback
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = tdb.sql(tstr(query, taint='UNTRUSTED'))
print(repr(res))
except SQLException as e:
pass
except Tainted as e:
print("> ", e)
except:
traceback.print_exc()
break
print()
###Output
'delete from inventory where y/u-l+f/y<Y(c)/A-H*q'
> Tainted[y/u-l+f/y<Y(c)/A-H*q]
"insert into inventory (G,Wmp,sl3hku3) values ('<','?')"
"insert into inventory (d0) values (',_G')"
'select P*Q-w/x from inventory where X<j==:==j*r-f'
> Tainted[(X<j==:==j*r-f)]
'select a>F*i from inventory where Q/I-_+P*j>.'
> Tainted[(Q/I-_+P*j>.)]
'select (V-i<T/g) from inventory where T/r/G<FK(m)/(i)'
> Tainted[(T/r/G<FK(m)/(i))]
'select (((i))),_(S,_)/L-k<H(Sv,R,n,W,Y) from inventory'
> Tainted[((((i))),_(S,_)/L-k<H(Sv,R,n,W,Y))]
'select (N==c*U/P/y),i-e/n*y,T!=w,u from inventory'
> Tainted[((N==c*U/P/y),i-e/n*y,T!=w,u)]
'update inventory set _=B,n=v where o-p*k-J>T'
'select s from inventory where w4g4<.m(_)/_>t'
> Tainted[(w4g4<.m(_)/_>t)]
###Markdown
One can see that `insert`, `update`, `select` and `delete` statements on an existing table lead to taint exceptions. We can now focus on these specific kinds of inputs. However, this is not the only thing we can do. We will see in later sections how we can use character origins to identify the specific portions of input that reached tainted execution. But before that, we explore other uses of taints.

Preventing Privacy Leaks

Using taints, we can also ensure that secret information does not leak out. We can assign a special taint `"SECRET"` to strings whose information must not leak out:
###Code
secrets = tstr('<Plenty of secret keys>', taint='SECRET')
###Output
_____no_output_____
###Markdown
Accessing any substring of `secrets` will propagate the taint:
###Code
secrets[1:3].taint # type: ignore
###Output
_____no_output_____
###Markdown
Consider the _heartbeat_ security leak from [the chapter on Fuzzing](Fuzzer.ipynb), in which a server would accidentally reply with not only the user input sent to it, but also secret memory. If the reply consists only of the user input, there is no taint associated with it:
###Code
user_input = "hello"
reply = user_input
isinstance(reply, tstr)
###Output
_____no_output_____
###Markdown
If, however, the reply contains _any_ part of the secret, the reply will be tainted:
###Code
reply = user_input + secrets[0:5]
reply
reply.taint # type: ignore
###Output
_____no_output_____
###Markdown
The output function of our server would now ensure that the data sent back does not contain any secret information:
###Code
def send_back(s):
assert not isinstance(s, tstr) and not s.taint == 'SECRET' # type: ignore
...
with ExpectError():
send_back(reply)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_13168/3747050841.py", line 2, in <module>
send_back(reply)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_13168/3158733057.py", line 2, in send_back
assert not isinstance(s, tstr) and not s.taint == 'SECRET' # type: ignore
AssertionError (expected)
###Markdown
Our `tstr` solution can help to identify information leaks – but it is by no means complete. If we actually take the `heartbeat()` implementation from [the chapter on Fuzzing](Fuzzer.ipynb), we will see that _any_ reply is marked as `SECRET` – even those not even accessing secret memory:
###Code
from Fuzzer import heartbeat
reply = heartbeat('hello', 5, memory=secrets)
reply.taint # type: ignore
###Output
_____no_output_____
###Markdown
Why is this? If we look into the implementation of `heartbeat()`, we will see that it first builds a long string `memory` from the (non-secret) reply and the (secret) memory, before returning the first characters from `memory`.

```python
# Store reply in memory
memory = reply + memory[len(reply):]
```

At this point, the whole memory still is tainted as `SECRET`, _including_ the non-secret part from `reply`. We may be able to circumvent the issue by tagging the `reply` as `PUBLIC` – but then, this taint would be in conflict with the `SECRET` tag of `memory`. What happens if we compose a string from two differently tainted strings?
###Code
thilo = tstr("High", taint='HIGH') + tstr("Low", taint='LOW')
###Output
_____no_output_____
###Markdown
It turns out that in this case, the `__add__()` method takes precedence over the `__radd__()` method, which means that the right-hand `"Low"` string is treated as a regular (non-tainted) string.
###Code
thilo
thilo.taint # type: ignore
###Output
_____no_output_____
###Markdown
We could set up the `__add__()` and other methods with special handling for conflicting taints. However, the way this conflict should be resolved would be highly _application-dependent_:

* If we use taints to indicate _privacy levels_, `SECRET` privacy should take precedence over `PUBLIC` privacy. Any combination of a `SECRET`-tainted string and a `PUBLIC`-tainted string thus should have a `SECRET` taint.
* If we use taints to indicate _origins_ of information, an `UNTRUSTED` origin should take precedence over a `TRUSTED` origin. Any combination of an `UNTRUSTED`-tainted string and a `TRUSTED`-tainted string thus should have an `UNTRUSTED` taint.

Of course, such conflict resolutions can be implemented. But even so, they will not help us in the `heartbeat()` example in differentiating secret from non-secret output data.

Tracking Individual Characters

Fortunately, there is a better, more generic way to solve the above problems. The key to composing differently tainted strings is to assign taints not only to whole strings, but to every bit of information – in our case, to individual characters. If every character has a taint of its own, a new composition of characters will simply inherit this very taint _per character_. To this end, we introduce a second bit of information named _origin_. Various untrusted sources can be distinguished by giving each instance its own origin (called *colors* in dynamic taint research). You will see an instance of this technique in the chapter on [Grammar Mining](GrammarMiner.ipynb). In this section, we track origins at the *character level*: given a fragment that resulted from a portion of the original origined string, one will be able to tell which portion of the input string the fragment was taken from. In essence, each input character index from an origined source gets its own color. More complex origin schemes, such as *bitmap origins*, are possible, where a single character may result from multiple origined character indexes (as in *checksum* operations on strings). We do not consider these in this chapter.

A Class for Tracking Character Origins

Let us introduce a class `ostr` which, like `tstr`, carries a taint for the whole string, and additionally an _origin_ for each character that indicates its source. The origin of a character is a number (by default, starting with zero) indicating its _position_ within a specific source.
###Code
class ostr(str):
"""Wrapper for strings, saving taint and origin information"""
DEFAULT_ORIGIN = 0
def __new__(cls, value, *args, **kw):
"""Create an ostr() instance. Used internally."""
return str.__new__(cls, value)
def __init__(self, value: Any, taint: Any = None,
origin: Optional[Union[int, List[int]]] = None, **kwargs) -> None:
"""Constructor.
`value` is the string value the `ostr` object is to be constructed from.
`taint` is an (optional) taint to be propagated to derived strings.
`origin` (optional) is either
- an integer denoting the index of the first character in `value`, or
- a list of integers denoting the origins of the characters in `value`,
"""
self.taint = taint
if origin is None:
origin = ostr.DEFAULT_ORIGIN
if isinstance(origin, int):
self.origin = list(range(origin, origin + len(self)))
else:
self.origin = origin
assert len(self.origin) == len(self)
###Output
_____no_output_____
###Markdown
As with `tstr`, above, we implement methods for conversion into (regular) Python strings:
###Code
class ostr(ostr):
def create(self, s):
return ostr(s, taint=self.taint, origin=self.origin)
class ostr(ostr):
UNKNOWN_ORIGIN = -1
def __repr__(self):
# handle escaped chars
origin = [ostr.UNKNOWN_ORIGIN]
for s, o in zip(str(self), self.origin):
origin.extend([o] * (len(repr(s)) - 2))
origin.append(ostr.UNKNOWN_ORIGIN)
return ostr(str.__repr__(self), taint=self.taint, origin=origin)
class ostr(ostr):
def __str__(self):
return str.__str__(self)
###Output
_____no_output_____
###Markdown
By default, character origins start with `0`:
###Code
othello = ostr('hello')
assert othello.origin == [0, 1, 2, 3, 4]
###Output
_____no_output_____
###Markdown
We can also specify the starting origin as below -- `6..10`
###Code
tworld = ostr('world', origin=6)
assert tworld.origin == [6, 7, 8, 9, 10]
a = ostr("hello\tworld")
repr(a).origin # type: ignore
###Output
_____no_output_____
###Markdown
`str()` returns a `str` instance without origin or taint information:
###Code
assert type(str(othello)) == str
###Output
_____no_output_____
###Markdown
`repr()`, however, keeps the origin information for the original string:
###Code
repr(othello)
repr(othello).origin # type: ignore
###Output
_____no_output_____
###Markdown
Just as with taints, we can clear origins and check whether an origin is present:
###Code
class ostr(ostr):
def clear_taint(self):
self.taint = None
return self
def has_taint(self):
return self.taint is not None
class ostr(ostr):
def clear_origin(self):
self.origin = [self.UNKNOWN_ORIGIN] * len(self)
return self
def has_origin(self):
return any(origin != self.UNKNOWN_ORIGIN for origin in self.origin)
othello = ostr('Hello')
assert othello.has_origin()
othello.clear_origin()
assert not othello.has_origin()
###Output
_____no_output_____
###Markdown
In the remainder of this section, we re-implement various string methods such that they also keep track of origins. If this is too tedious for you, jump right [to the next section](Checking-Origins), which gives a number of usage examples.

Excursion: Implementing String Methods

Create

We need to create new substrings that are wrapped in `ostr` objects. However, we also want to allow our subclasses to create their own instances. Hence we again provide a `create()` method that produces a new `ostr` instance.
###Code
class ostr(ostr):
def create(self, res, origin=None):
return ostr(res, taint=self.taint, origin=origin)
othello = ostr('hello', taint='HIGH')
otworld = othello.create('world', origin=6)
otworld.origin
otworld.taint
assert (othello.origin, otworld.origin) == (
[0, 1, 2, 3, 4], [6, 7, 8, 9, 10])
###Output
_____no_output_____
###Markdown
Index

In Python, indexing is provided through `__getitem__()`. Indexing with non-negative integers is simple enough. However, it has two additional wrinkles. The first is that a negative index counts from the end of the string; that is, the last character has index `-1`.
###Code
class ostr(ostr):
def __getitem__(self, key):
res = super().__getitem__(key)
if isinstance(key, int):
key = len(self) + key if key < 0 else key
return self.create(res, [self.origin[key]])
elif isinstance(key, slice):
return self.create(res, self.origin[key])
else:
assert False
ohello = ostr('hello', taint='HIGH')
assert (ohello[0], ohello[-1]) == ('h', 'o')
ohello[0].taint
###Output
_____no_output_____
###Markdown
The other wrinkle is that `__getitem__()` can accept a slice. We discuss this next.

Slices

The Python slice operator `[n:m]` is handled by the `slice` case of `__getitem__()` above. In addition, we make `ostr` iterable by defining the `__iter__()` method, which returns a custom iterator.
###Code
class ostr(ostr):
def __iter__(self):
return ostr_iterator(self)
###Output
_____no_output_____
###Markdown
The `__iter__()` method requires a supporting `iterator` object. The `iterator` is used to save the state of the current iteration, which it does by keeping a reference to the original `ostr`, and the current index of iteration `_str_idx`.
###Code
class ostr_iterator():
def __init__(self, ostr):
self._ostr = ostr
self._str_idx = 0
def __next__(self):
if self._str_idx == len(self._ostr):
raise StopIteration
# calls ostr getitem should be ostr
c = self._ostr[self._str_idx]
assert isinstance(c, ostr)
self._str_idx += 1
return c
###Output
_____no_output_____
###Markdown
Bringing all these together:
###Code
thw = ostr('hello world', taint='HIGH')
thw[0:5]
assert thw[0:5].has_taint()
assert thw[0:5].has_origin()
thw[0:5].taint
thw[0:5].origin
###Output
_____no_output_____
###Markdown
Splits
###Code
def make_split_wrapper(fun):
def proxy(self, *args, **kwargs):
lst = fun(self, *args, **kwargs)
return [self.create(elem) for elem in lst]
return proxy
for name in ['split', 'rsplit', 'splitlines']:
fun = getattr(str, name)
setattr(ostr, name, make_split_wrapper(fun))
othello = ostr('hello world', taint='LOW')
othello == 'hello world'
othello.split()[0].taint # type: ignore
###Output
_____no_output_____
###Markdown
(Exercise for the reader: handle _partitions_, i.e., splitting a string by substrings.)

Concatenation

If two origined strings are concatenated, it may be desirable to transfer the origins from each to the corresponding portion of the resulting string. The concatenation of strings is accomplished by overriding `__add__()`.
###Code
class ostr(ostr):
def __add__(self, other):
if isinstance(other, ostr):
return self.create(str.__add__(self, other),
(self.origin + other.origin))
else:
return self.create(str.__add__(self, other),
(self.origin + [self.UNKNOWN_ORIGIN for i in other]))
###Output
_____no_output_____
###Markdown
Testing concatenations between two `ostr` instances:
###Code
othello = ostr("hello")
otworld = ostr("world", origin=6)
othw = othello + otworld
assert othw.origin == [0, 1, 2, 3, 4, 6, 7, 8, 9, 10] # type: ignore
###Output
_____no_output_____
###Markdown
What if an `ostr` is concatenated with a `str`?
###Code
space = " "
th_w = othello + space + otworld
assert th_w.origin == [
0,
1,
2,
3,
4,
ostr.UNKNOWN_ORIGIN,
ostr.UNKNOWN_ORIGIN,
6,
7,
8,
9,
10]
###Output
_____no_output_____
###Markdown
One wrinkle here is that when adding an `ostr` and a `str`, the user may place the `str` first, in which case the `__add__()` method will be called on the `str` instance, not on the `ostr` instance. However, Python provides a solution: if one defines `__radd__()` on the `ostr` instance, that method will be called rather than `str.__add__()`.
###Code
class ostr(ostr):
def __radd__(self, other):
origin = other.origin if isinstance(other, ostr) else [
self.UNKNOWN_ORIGIN for i in other]
return self.create(str.__add__(other, self), (origin + self.origin))
###Output
_____no_output_____
###Markdown
We test it out:
###Code
shello = "hello"
otworld = ostr("world")
thw = shello + otworld
assert thw.origin == [ostr.UNKNOWN_ORIGIN] * len(shello) + [0, 1, 2, 3, 4] # type: ignore
###Output
_____no_output_____
###Markdown
These two methods, slicing and concatenation, are sufficient to implement other string methods that result in a string and do not change the characters themselves (i.e., no case changes). Hence, we look at a helper method next.

Extract Origin String

Given a specific input index, the method `x()` extracts the corresponding origined portion from an `ostr`. As a convenience, it supports slices as well as ints.
###Code
class ostr(ostr):
class TaintException(Exception):
pass
def x(self, i=0):
"""Extract substring at index/slice `i`"""
if not self.origin:
            raise self.TaintException('Invalid request idx')
if isinstance(i, int):
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j == i]]
elif isinstance(i, slice):
r = range(i.start or 0, i.stop or len(self), i.step or 1)
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j in r]]
thw = ostr('hello world', origin=100)
assert thw.x(101) == ['e']
assert thw.x(slice(101, 105)) == ['e', 'l', 'l', 'o']
###Output
_____no_output_____
###Markdown
Replace

The `replace()` method replaces a portion of the string with another.
###Code
class ostr(ostr):
def replace(self, a, b, n=None):
old_origin = self.origin
b_origin = b.origin if isinstance(
b, ostr) else [self.UNKNOWN_ORIGIN] * len(b)
mystr = str(self)
i = 0
while True:
if n and i >= n:
break
idx = mystr.find(a)
if idx == -1:
break
last = idx + len(a)
mystr = mystr.replace(a, b, 1)
partA, partB = old_origin[0:idx], old_origin[last:]
old_origin = partA + b_origin + partB
i += 1
return self.create(mystr, old_origin)
my_str = ostr("aa cde aa")
res = my_str.replace('aa', 'bb')
assert res, res.origin == ('bb', 'cde', 'bb',
[ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN,
2, 3, 4, 5, 6,
ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN])
my_str = ostr("aa cde aa")
res = my_str.replace('aa', ostr('bb', origin=100))
assert (
res, res.origin) == (
('bb cde bb'), [
100, 101, 2, 3, 4, 5, 6, 100, 101])
###Output
_____no_output_____
###Markdown
Split

We essentially have to re-implement the split operations; splitting by whitespace is slightly different from splitting by an explicit separator.
###Code
class ostr(ostr):
def _split_helper(self, sep, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = len(sep)
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
first_idx = last_idx + sep_len
return result_list
def _split_space(self, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = 0
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
v = str(self[last_idx:])
sep_len = len(v) - len(v.lstrip(' '))
first_idx = last_idx + sep_len
return result_list
def rsplit(self, sep=None, maxsplit=-1):
splitted = super().rsplit(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
def split(self, sep=None, maxsplit=-1):
splitted = super().split(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
my_str = ostr('ab cdef ghij kl')
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([0, 1], [3, 4, 5, 6], [8, 9, 10, 11], [13, 14])
my_str = ostr('ab cdef ghij kl', origin=list(range(0, 15)))
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert(ab.origin, cdef.origin, kl.origin) == ([0, 1], [3, 4, 5, 6], [13, 14])
my_str = ostr('ab cdef ghij kl', origin=100, taint='HIGH')
ab, cdef, ghij, kl = my_str.rsplit()
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([100, 101], [105, 106, 107, 108], [110, 111, 112, 113],
[118, 119])
my_str = ostr('ab cdef ghij kl', origin=list(range(0, 20)), taint='HIGH')
ab, cdef, ghij, kl = my_str.split()
assert (ab.origin, cdef.origin, kl.origin) == ([0, 1], [5, 6, 7, 8], [18, 19])
assert ab.taint == 'HIGH'
###Output
_____no_output_____
###Markdown
Strip
###Code
class ostr(ostr):
def strip(self, cl=None):
return self.lstrip(cl).rstrip(cl)
def lstrip(self, cl=None):
res = super().lstrip(cl)
i = self.find(res)
return self[i:]
def rstrip(self, cl=None):
res = super().rstrip(cl)
return self[0:len(res)]
my_str1 = ostr(" abc ")
v = my_str1.strip()
assert v, v.origin == ('abc', [2, 3, 4])
my_str1 = ostr(" abc ")
v = my_str1.lstrip()
assert (v, v.origin) == ('abc ', [2, 3, 4, 5, 6])
my_str1 = ostr(" abc ")
v = my_str1.rstrip()
assert (v, v.origin) == (' abc', [0, 1, 2, 3, 4])
###Output
_____no_output_____
###Markdown
Expand Tabs
###Code
class ostr(ostr):
def expandtabs(self, n=8):
parts = self.split('\t')
res = super().expandtabs(n)
all_parts = []
for i, p in enumerate(parts):
all_parts.extend(p.origin)
if i < len(parts) - 1:
l = len(all_parts) % n
all_parts.extend([p.origin[-1]] * l)
return self.create(res, all_parts)
my_s = str("ab\tcd")
my_ostr = ostr("ab\tcd")
v1 = my_s.expandtabs(4)
v2 = my_ostr.expandtabs(4)
assert str(v1) == str(v2)
assert (len(v1), repr(v2), v2.origin) == (6, "'ab cd'", [0, 1, 1, 1, 3, 4])
class ostr(ostr):
def join(self, iterable):
mystr = ''
myorigin = []
sep_origin = self.origin
lst = list(iterable)
for i, s in enumerate(lst):
sorigin = s.origin if isinstance(s, ostr) else [
self.UNKNOWN_ORIGIN] * len(s)
myorigin.extend(sorigin)
mystr += str(s)
if i < len(lst) - 1:
myorigin.extend(sep_origin)
mystr += str(self)
res = super().join(iterable)
assert len(res) == len(mystr)
return self.create(res, myorigin)
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104]) # type: ignore
v4 = ostr('').join([v2, v3, v1])
assert (
v4, v4.origin) == (
'cdefab', [
103, 104, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 100, 101])
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104]) # type: ignore
v4 = ostr(',').join([v2, v3, v1])
assert (v4, v4.origin) == ('cd,ef,ab',
[103, 104, 0, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 0, 100, 101]) # type: ignore
###Output
_____no_output_____
###Markdown
Partitions
###Code
class ostr(ostr):
def partition(self, sep):
partA, sep, partB = super().partition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
def rpartition(self, sep):
partA, sep, partB = super().rpartition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
###Output
_____no_output_____
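###Markdown
As a quick (illustrative) check, the origins of the three parts add up to the origins of the original string:
```python
my_str = ostr('hello,world', origin=100)
a, sep, b = my_str.partition(',')
(a.origin, sep.origin, b.origin)
# -> ([100, 101, 102, 103, 104], [105], [106, 107, 108, 109, 110])
```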
###Markdown
Justify
###Code
class ostr(ostr):
    def ljust(self, width, fillchar=' '):
        res = super().ljust(width, fillchar)
        initial = len(res) - len(self)
        if isinstance(fillchar, ostr):
            t = fillchar.origin[0]
        else:
            t = self.UNKNOWN_ORIGIN
        # ljust() pads on the right, so the fill origins come after the original ones
        return self.create(res, self.origin + [t] * initial)
class ostr(ostr):
    def rjust(self, width, fillchar=' '):
        res = super().rjust(width, fillchar)
        final = len(res) - len(self)
        if isinstance(fillchar, ostr):
            t = fillchar.origin[0]
        else:
            t = self.UNKNOWN_ORIGIN
        # rjust() pads on the left, so the fill origins come before the original ones
        return self.create(res, [t] * final + self.origin)
###Output
_____no_output_____
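###Markdown
As a quick (illustrative) check, padding characters receive `UNKNOWN_ORIGIN`, while the original characters keep their origins:
```python
padded = ostr('ab', origin=100).rjust(4)
(str(padded), padded.origin)   # -> ('  ab', [-1, -1, 100, 101])
```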
###Markdown
mod
###Code
class ostr(ostr):
def __mod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
s_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = self.find('%s')
assert i >= 0
res = super().__mod__(s)
r_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
class ostr(ostr):
def __rmod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
r_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = s.find('%s')
assert i >= 0
res = super().__rmod__(s)
s_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
a = ostr('hello %s world', origin=100)
a
(a % 'good').origin
b = 'hello %s world'
c = ostr('bad', origin=10)
(b % c).origin
###Output
_____no_output_____
###Markdown
String methods that do not change origin
###Code
class ostr(ostr):
def swapcase(self):
return self.create(str(self).swapcase(), self.origin)
def upper(self):
return self.create(str(self).upper(), self.origin)
def lower(self):
return self.create(str(self).lower(), self.origin)
def capitalize(self):
return self.create(str(self).capitalize(), self.origin)
def title(self):
return self.create(str(self).title(), self.origin)
a = ostr('aa', origin=100).upper()
a, a.origin
###Output
_____no_output_____
###Markdown
General wrappers

These are not strictly needed for operation, but can be useful for tracing.
###Code
def make_basic_str_wrapper(fun): # type: ignore
def proxy(*args, **kwargs):
res = fun(*args, **kwargs)
return res
return proxy
import inspect
import types
def informationflow_init_2():
ostr_members = [name for name, fn in inspect.getmembers(ostr, callable)
if isinstance(fn, types.FunctionType) and fn.__qualname__.startswith('ostr')]
for name, fn in inspect.getmembers(str, callable):
if name not in set(['__class__', '__new__', '__str__', '__init__',
'__repr__', '__getattribute__']) | set(ostr_members):
setattr(ostr, name, make_basic_str_wrapper(fn))
informationflow_init_2()
INITIALIZER_LIST.append(informationflow_init_2)
###Output
_____no_output_____
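###Markdown
Methods wrapped this way simply delegate to `str` and return their usual results; as an illustrative check:
```python
ostr('hello world', origin=100).find('world')   # -> 6, a plain int
```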
###Markdown
Methods yet to be translated

These methods generate strings from other strings. However, we do not have origin-preserving implementations for any of them yet. Hence, they are marked as unsupported (raising a `TaintException`) until we can provide the right translations.
###Code
def make_str_abort_wrapper(fun):
def proxy(*args, **kwargs):
raise ostr.TaintException(
'%s Not implemented in `ostr`' %
fun.__name__)
return proxy
def informationflow_init_3():
for name, fn in inspect.getmembers(str, callable):
# Omitted 'splitlines' as this is needed for formatting output in
# IPython/Jupyter
if name in ['__format__', 'format_map', 'format',
'__mul__', '__rmul__', 'center', 'zfill', 'decode', 'encode']:
setattr(ostr, name, make_str_abort_wrapper(fn))
informationflow_init_3()
INITIALIZER_LIST.append(informationflow_init_3)
###Output
_____no_output_____
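###Markdown
For instance, calling one of these unsupported methods aborts with a `TaintException` (an illustrative check):
```python
with ExpectError():
    ostr('hello').center(10)   # center() is in the list above
```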
###Markdown
While generating proxy wrappers for string operations can handle most common cases of transmission of information flow, some of the operations involving strings cannot be overridden at all. For example, calling `join()` on a plain `str` separator (as in `''.join(...)`) with `ostr` arguments is dispatched to `str.join()` and returns a plain `str`, silently dropping all origins.

End of Excursion

Checking Origins

With all this implemented, we now have full-fledged `ostr` strings where we can easily check the origin of each and every character. To check whether a string originates from another string, we can convert the origin to a set and resort to standard set operations:
###Code
s = ostr("hello", origin=100)
s[1]
s[1].origin
set(s[1].origin) <= set(s.origin)
t = ostr("world", origin=200)
set(s.origin) <= set(t.origin)
u = s + t + "!"
u.origin
ostr.UNKNOWN_ORIGIN in u.origin
###Output
_____no_output_____
###Markdown
Privacy Leaks Revisited

Let us apply origin tracking to see whether we can come up with a satisfactory solution for checking the `heartbeat()` function against information leakage.
###Code
SECRET_ORIGIN = 1000
###Output
_____no_output_____
###Markdown
We define a "secret" that must not leak out:
###Code
secret = ostr('<again, some super-secret input>', origin=SECRET_ORIGIN)
###Output
_____no_output_____
###Markdown
Each and every character in `secret` has an origin starting with `SECRET_ORIGIN`:
###Code
print(secret.origin)
###Output
[1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031]
###Markdown
If we now invoke `heartbeat()` with a given string, the origins of the reply characters should all be `UNKNOWN_ORIGIN` (as they stem from the input), and none of the characters should have an origin in the `SECRET_ORIGIN` range.
###Code
hello_s = heartbeat('hello', 5, memory=secret)
hello_s
assert isinstance(hello_s, ostr)
print(hello_s.origin)
###Output
[-1, -1, -1, -1, -1]
###Markdown
We can verify that the secret did not leak out by formulating appropriate assertions:
###Code
assert hello_s.origin == [ostr.UNKNOWN_ORIGIN] * len(hello_s)
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in hello_s.origin)
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
###Output
_____no_output_____
###Markdown
All assertions pass, again confirming that no secret leaked out. Let us now go and exploit `heartbeat()` to reveal its secrets. As `heartbeat()` is unchanged, it is as vulnerable as it was:
###Code
hello_s = heartbeat('hello', 32, memory=secret)
hello_s
###Output
_____no_output_____
###Markdown
Now, however, the reply _does_ contain secret information:
###Code
assert isinstance(hello_s, ostr)
print(hello_s.origin)
with ExpectError():
assert hello_s.origin == [ostr.UNKNOWN_ORIGIN] * len(hello_s)
with ExpectError():
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in hello_s.origin)
with ExpectError():
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_13168/1577803914.py", line 2, in <module>
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
AssertionError (expected)
###Markdown
We can now integrate these assertions into the `heartbeat()` function, causing it to fail before leaking information. Additionally (or alternatively), we can also rewrite our output functions not to give out any secret information. We will leave these two exercises for the reader.

Taint-Directed Fuzzing

The previous _taint-aware fuzzing_ was a bit unsatisfactory in that we could not focus on the specific parts of the grammar that led to dangerous operations. We fix that with _taint-directed fuzzing_ using `TrackingDB`. The idea here is to track the origins of each character that reaches `eval()`, trace them back to the grammar nodes that generated them, and increase the probability of using those nodes again.

TrackingDB

The `TrackingDB` is similar to `TaintedDB`. The difference is that if execution reaches `my_eval()` with an origined string, we simply raise `Tainted`.
###Code
class TrackingDB(TaintedDB):
def my_eval(self, statement, g, l):
if statement.origin:
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
Next, we need a specially crafted fuzzer that preserves the taints.

TaintedGrammarFuzzer

We define a `TaintedGrammarFuzzer` class that ensures that the taints propagate to the derivation tree. This is similar to the `GrammarFuzzer` from the [chapter on grammar fuzzers](GrammarFuzzer.ipynb), except that the origins and taints are preserved.
###Code
import random
from GrammarFuzzer import GrammarFuzzer
from Parser import canonical
class TaintedGrammarFuzzer(GrammarFuzzer):
def __init__(self,
grammar,
start_symbol=START_SYMBOL,
expansion_switch=1,
log=False):
self.tainted_start_symbol = ostr(
start_symbol, origin=[1] * len(start_symbol))
self.expansion_switch = expansion_switch
self.log = log
self.grammar = grammar
self.c_grammar = canonical(grammar)
self.init_tainted_grammar()
def expansion_cost(self, expansion, seen=set()):
symbols = [e for e in expansion if e in self.c_grammar]
if len(symbols) == 0:
return 1
if any(s in seen for s in symbols):
return float('inf')
return sum(self.symbol_cost(s, seen) for s in symbols) + 1
def fuzz_tree(self):
tree = (self.tainted_start_symbol, [])
nt_leaves = [tree]
expansion_trials = 0
while nt_leaves:
idx = random.randint(0, len(nt_leaves) - 1)
key, children = nt_leaves[idx]
expansions = self.ct_grammar[key]
if expansion_trials < self.expansion_switch:
expansion = random.choice(expansions)
else:
costs = [self.expansion_cost(e) for e in expansions]
m = min(costs)
all_min = [i for i, c in enumerate(costs) if c == m]
expansion = expansions[random.choice(all_min)]
new_leaves = [(token, []) for token in expansion]
new_nt_leaves = [e for e in new_leaves if e[0] in self.ct_grammar]
children[:] = new_leaves
nt_leaves[idx:idx + 1] = new_nt_leaves
if self.log:
print("%-40s" % (key + " -> " + str(expansion)))
expansion_trials += 1
return tree
def fuzz(self):
self.derivation_tree = self.fuzz_tree()
return self.tree_to_string(self.derivation_tree)
###Output
_____no_output_____
###Markdown
We use a specially prepared tainted grammar for fuzzing. We mark each individual definition, each individual rule, and each individual token with a separate origin (we chose a token increment of 10 here, after inspecting the grammar). This allows us to track exactly which parts of the grammar were involved in the operations we are interested in.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def init_tainted_grammar(self):
key_increment, alt_increment, token_increment = 1000, 100, 10
key_origin = key_increment
self.ct_grammar = {}
for key, val in self.c_grammar.items():
key_origin += key_increment
os = []
for v in val:
ts = []
key_origin += alt_increment
for t in v:
nt = ostr(t, origin=key_origin)
key_origin += token_increment
ts.append(nt)
os.append(ts)
self.ct_grammar[key] = os
# a use tracking grammar
self.ctp_grammar = {}
for key, val in self.ct_grammar.items():
self.ctp_grammar[key] = [(v, dict(use=0)) for v in val]
###Output
_____no_output_____
###Markdown
As before, we initialize the `TrackingDB`
###Code
trdb = TrackingDB(db.db)
###Output
_____no_output_____
###Markdown
Finally, we need to ensure that the taints are preserved when the tree is converted back to a string. For this, we define the `tree_to_string()` method:
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def tree_to_string(self, tree):
symbol, children, *_ = tree
e = ostr('')
if children:
return e.join([self.tree_to_string(c) for c in children])
else:
return e if symbol in self.c_grammar else symbol
###Output
_____no_output_____
###Markdown
We define `update_grammar()`, which takes the set of origins that reached the dangerous operations, together with the derivation tree of the string used for fuzzing, and updates the usage counts in the enhanced grammar.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def update_grammar(self, origin, dtree):
def update_tree(dtree, origin):
key, children = dtree
if children:
updated_children = [update_tree(c, origin) for c in children]
corigin = set.union(
*[o for (key, children, o) in updated_children])
corigin = corigin.union(set(key.origin))
return (key, children, corigin)
else:
my_origin = set(key.origin).intersection(origin)
return (key, [], my_origin)
key, children, oset = update_tree(dtree, set(origin))
for key, alts in self.ctp_grammar.items():
for alt, o in alts:
alt_origins = set([i for token in alt for i in token.origin])
if alt_origins.intersection(oset):
o['use'] += 1
###Output
_____no_output_____
###Markdown
With these, we are now ready to fuzz.
###Code
def tree_type(tree):
key, children = tree
return (type(key), key, [tree_type(c) for c in children])
tgf = TaintedGrammarFuzzer(INVENTORY_GRAMMAR_F)
x = None
for _ in range(10):
qtree = tgf.fuzz_tree()
query = tgf.tree_to_string(qtree)
assert isinstance(query, ostr)
try:
print(repr(query))
res = trdb.sql(query)
print(repr(res))
except SQLException as e:
print(e)
except Tainted as e:
print(e)
origin = e.args[0].origin
tgf.update_grammar(origin, qtree)
except:
traceback.print_exc()
break
print()
###Output
'select (g!=(9)!=((:)==2==9)!=J)==-7 from inventory'
Tainted[((g!=(9)!=((:)==2==9)!=J)==-7)]
'delete from inventory where ((c)==T)!=5==(8!=Y)!=-5'
Tainted[((c)==T)!=5==(8!=Y)!=-5]
'select (((w==(((X!=------8)))))) from inventory'
Tainted[((((w==(((X!=------8)))))))]
'delete from inventory where ((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))'
Tainted[((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))]
'delete from inventory where ((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))'
Tainted[((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))]
'delete from inventory where o!=2==---5==3!=t'
Tainted[o!=2==---5==3!=t]
'select (2) from inventory'
Tainted[((2))]
'select _ from inventory'
Tainted[(_)]
'select L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)) from inventory'
Tainted[(L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)))]
'delete from inventory where _==(7==(9)!=(---5)==1)==-8'
Tainted[_==(7==(9)!=(---5)==1)==-8]
###Markdown
We can now inspect our enhanced grammar to see how many times each rule was used.
###Code
tgf.ctp_grammar
###Output
_____no_output_____
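###Markdown
The `use` counters collected in `ctp_grammar` can be normalized into relative weights per expansion alternative. The following is a sketch of one possible post-processing step (not part of the original implementation); how such weights would be fed into, say, a probabilistic grammar fuzzer is left open here.
###Code
def expansion_weights(ctp_grammar):
    """Sketch: turn per-alternative `use` counters into relative weights."""
    weights = {}
    for key, alts in ctp_grammar.items():
        total = sum(o['use'] for _, o in alts)
        weights[key] = [("".join(str(t) for t in alt),
                         o['use'] / total if total else 0.0)
                        for alt, o in alts]
    return weights
expansion_weights(tgf.ctp_grammar)
###Output
_____no_output_____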
###Markdown
From here, the idea is to focus on the rules that reached dangerous operations more often, and to increase the probability of generating inputs of that kind. The Limits of Taint TrackingWhile our framework can detect information leakage, it is by no means perfect. There are several ways in which taints can get lost and information thus may still leak out. ConversionsWe only track taints and origins through _strings_ and _characters_. If we convert these to numbers (or other data), the information is lost. As an example, consider this function, converting individual characters to numbers and back:
###Code
def strip_all_info(s):
t = ""
for c in s:
t += chr(ord(c))
return t
thello = ostr("Secret")
thello
thello.origin  # type: ignore
###Output
_____no_output_____
###Markdown
The taints and origins will not propagate through the number conversion:
###Code
thello_stripped = strip_all_info(thello)
thello_stripped
with ExpectError():
thello_stripped.origin
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_13168/588526133.py", line 2, in <module>
thello_stripped.origin
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
This issue could be addressed by extending numbers with taints and origins, just as we did for strings. At some point, however, this will still break down, because as soon as an internal C function in the Python library is reached, the taint will not propagate into and across the C function. (Unless one starts implementing dynamic taints for these, that is.) Internal C libraries As we mentioned before, calls to _internal_ C libraries do not propagate taints. For example, while the following preserves the taints,
###Code
hello = ostr('hello', origin=100)
world = ostr('world', origin=200)
(hello + ' ' + world).origin
###Output
_____no_output_____
###Markdown
a call to a `join` that should be equivalent will fail.
###Code
with ExpectError():
''.join([hello, ' ', world]).origin # type: ignore
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_13168/2341342688.py", line 2, in <module>
''.join([hello, ' ', world]).origin # type: ignore
AttributeError: 'str' object has no attribute 'origin' (expected)
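###Markdown
One pragmatic workaround (a sketch, not part of the original implementation) is to avoid the C-level `join()` and instead build the result via repeated `+`, which goes through our overloaded operators and thus keeps taints and origins.
###Code
def origin_preserving_join(sep, parts):
    # Sketch: like sep.join(parts), but using `+` so that `ostr`
    # taints and origins propagate into the result.
    # The separator, being a plain `str`, shows up with UNKNOWN_ORIGIN.
    result = parts[0]
    for part in parts[1:]:
        result = result + sep + part
    return result
origin_preserving_join(' ', [hello, world]).origin
###Output
_____no_output_____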
###Markdown
Implicit Information FlowEven if one could taint all data in a program, there still would be means to break information flow – notably by turning explicit flow into _implicit_ flow, or data flow into _control flow_. Here is an example:
###Code
def strip_all_info_again(s):
t = ""
for c in s:
if c == 'a':
t += 'a'
elif c == 'b':
t += 'b'
elif c == 'c':
t += 'c'
...
###Output
_____no_output_____
###Markdown
With such a function, there is no explicit data flow between the characters in `s` and the characters in `t`; yet, the strings would be identical. This problem frequently occurs in programs that process and manipulate external input. Enforcing TaintingBoth, conversions and implicit information flow are one of several possibilities how taint and origin information get lost. To address the problem, the best solution is to _always assume the worst from untainted strings_:* As it comes to trust, an untainted string should be treated as _possibly untrusted_, and hence not relied upon unless sanitized.* As it comes to privacy, an untainted string should be treated as _possibly secret_, and hence not leaked out.As a consequence, your program should always have two kinds of taints: one for explicitly trusted (or secret) and one for explicitly untrusted (or non-secret). If a taint gets lost along the way, you will may have to restore it from its sources – not unlike the string methods discussed above. The benefit is a trusted application, in which each and every information flow can be checked at runtime, with violations quickly discovered through automated tests. Synopsis This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string. Tracking String Taints`tstr` objects are replacements for Python strings that allows to track and check _taints_ – that is, information on from where a string originated. For instance, one can mark strings that originate from third party input with a taint of "LOW", meaning that they have a low security level. The taint is passed in the constructor of a `tstr` object:
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
A `tstr` object is fully compatible with original Python strings. For instance, we can index it and access substrings:
###Code
thello[:4]
###Output
_____no_output_____
###Markdown
However, the `tstr` object also stores the taint, which can be accessed using the `taint` attribute:
###Code
thello.taint
###Output
_____no_output_____
###Markdown
The neat thing about taints is that they propagate to all strings derived from the original tainted string.Indeed, any operation from a `tstr` string that results in a string fragment produces another `tstr` object that includes the original taint. For example:
###Code
thello[1:2].taint # type: ignore
###Output
_____no_output_____
###Markdown
`tstr` objects duplicate most `str` methods, as indicated in the class diagram:
###Code
# ignore
from ClassDiagram import display_class_hierarchy
display_class_hierarchy(tstr)
###Output
_____no_output_____
###Markdown
Tracking Character Origins`ostr` objects extend `tstr` objects by not only tracking a taint, but also the originating _indexes_ from the input string. This allows you to track exactly where individual characters came from. Assume you have a long string, which at index 100 contains the password `"joshua1234"`. Then you can save this origin information using an `ostr` as follows:
###Code
secret = ostr("joshua1234", origin=100, taint='SECRET')
###Output
_____no_output_____
###Markdown
The `origin` attribute of an `ostr` provides access to a list of indexes:
###Code
secret.origin
secret.taint
###Output
_____no_output_____
###Markdown
`ostr` objects are compatible with Python strings, except that string operations return `ostr` objects (together with the saved origin and index information). An index of `-1` indicates that the corresponding character has no origin as supplied to the `ostr()` constructor:
###Code
secret_substr = (secret[0:4] + "-" + secret[6:])
secret_substr.taint
secret_substr.origin
###Output
_____no_output_____
###Markdown
`ostr` objects duplicate most `str` methods, as indicated in the class diagram:
###Code
# ignore
display_class_hierarchy(ostr)
###Output
_____no_output_____
###Markdown
Lessons Learned* String-based and character-based taints allow us to dynamically track the information flow from input to the internals of a system and back to the output.* Checking taints allows us to discover untrusted inputs and information leakage at runtime.* Data conversions and implicit data flow may strip taint information; the resulting untainted strings should be treated as having the worst possible taint.* Taints can be used in conjunction with fuzzing to provide a more robust indication of incorrect behavior than simply relying on program crashes. Next StepsAn even better alternative to our taint-directed fuzzing is to make use of _symbolic_ techniques that take the semantics of the program under test into account. The chapter on [flow fuzzing](FlowFuzzer.ipynb) introduces these symbolic techniques for the purpose of exploring information flows; the subsequent chapter on [symbolic fuzzing](SymbolicFuzzer.ipynb) then shows how to make full-fledged use of symbolic execution for covering code. Similarly, [search based fuzzing](SearchBasedFuzzer.ipynb) can often provide a cheaper exploration strategy. BackgroundTaint analysis on Python using a library approach as we implemented in this chapter was discussed by Conti et al. \cite{Conti2010}. Exercises Exercise 1: Tainted NumbersIntroduce a class `tint` (for tainted integer) that, like `tstr`, has a taint attribute that gets passed on from `tint` to `tint`. Part 1: CreationImplement the `tint` class such that taints are set:```pythonx = tint(42, taint='SECRET')assert x.taint == 'SECRET'``` **Solution.** This is pretty straightforward, as we can apply the same scheme as for `tstr`:
###Code
class tint(int):
def __new__(cls, value, *args, **kw):
return int.__new__(cls, value)
def __init__(self, value, taint=None, **kwargs):
self.taint = taint
x = tint(42, taint='SECRET')
assert x.taint == 'SECRET'
###Output
_____no_output_____
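###Markdown
Note that at this point, arithmetic on a `tint` still falls back to plain `int` and thus loses the taint; this is what the next part addresses. (A quick check, not part of the original solution.)
###Code
y = x + 1
assert not isinstance(y, tint)   # int.__add__ returns a plain int
assert not hasattr(y, 'taint')   # hence the taint is gone
###Output
_____no_output_____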
###Markdown
Part 2: Arithmetic expressionsEnsure that taints get passed along arithmetic expressions; support addition, subtraction, multiplication, and division operators.```pythony = x + 1assert y.taint == 'SECRET'``` **Solution.** As with `tstr`, we implement a `create()` method and a convenience function to quickly define all arithmetic operations:
###Code
class tint(tint):
def create(self, n):
# print("New tint from", n)
return tint(n, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_int_wrapper()` function creates a wrapper around an existing `int` method which attaches the taint to the result of the method:
###Code
def make_int_wrapper(fun):
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
# print(fun, args, kwargs, "=", repr(res))
return self.create(res)
return proxy
###Output
_____no_output_____
###Markdown
We do this for all arithmetic operators:
###Code
for name in ['__add__', '__radd__', '__mul__', '__rmul__', '__sub__',
'__floordiv__', '__truediv__']:
fun = getattr(int, name)
setattr(tint, name, make_int_wrapper(fun))
x = tint(42, taint='SECRET')
y = x + 1
y.taint # type: ignore
###Output
_____no_output_____
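###Markdown
Taints also survive longer arithmetic chains, since every intermediate result is again a `tint` (a quick check, not part of the original solution):
###Code
z = (x * 3 - 7) // 2
assert isinstance(z, tint)
z.taint
###Output
_____no_output_____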
###Markdown
Part 3: Passing taints from integers to stringsConverting a tainted integer into a string (using `repr()`) should yield a tainted string:```pythonx_s = repr(x)assert x_s.taint == 'SECRET'``` **Solution.** We define the string conversion functions such that they return a tainted string (`tstr`):
###Code
class tint(tint):
def __repr__(self) -> tstr:
s = int.__repr__(self)
return tstr(s, taint=self.taint)
class tint(tint):
def __str__(self) -> tstr:
return tstr(int.__str__(self), taint=self.taint)
x = tint(42, taint='SECRET')
x_s = repr(x)
assert isinstance(x_s, tstr)
assert x_s.taint == 'SECRET'
###Output
_____no_output_____
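###Markdown
Since we also overrode `__str__()`, plain `str()` conversion carries the taint as well (a quick check, not part of the original solution):
###Code
x_str = str(x)
assert isinstance(x_str, tstr)
x_str.taint
###Output
_____no_output_____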
###Markdown
Part 4: Passing taints from strings to integersConverting a tainted object (with a `taint` attribute) to an integer should pass that taint:```pythonpassword = tstr('1234', taint='NOT_EXACTLY_SECRET')x = tint(password)assert x == 1234assert x.taint == 'NOT_EXACTLY_SECRET'``` **Solution.** This can be done by having the `__init__()` constructor check for a `taint` attribute:
###Code
class tint(tint):
def __init__(self, value, taint=None, **kwargs):
if taint is not None:
self.taint = taint
else:
self.taint = getattr(value, 'taint', None)
password = tstr('1234', taint='NOT_EXACTLY_SECRET')
x = tint(password)
assert x == 1234
assert x.taint == 'NOT_EXACTLY_SECRET'
###Output
_____no_output_____
###Markdown
Tracking Information FlowWe have explored how one could generate better inputs that can penetrate deeper into the program in question. While doing so, we have relied on program crashes to tell us that we have succeeded in finding problems in the program. However, that is rather simplistic. What if the behavior of the program is simply incorrect, but does not lead to a crash? Can one do better?In this chapter, we explore in depth how to track information flows in Python, and how these flows can be used to determine whether a program behaved as expected. **Prerequisites*** You should have read the [chapter on coverage](Coverage.ipynb).* You should have read the [chapter on probabilistic fuzzing](ProbabilisticGrammarFuzzer.ipynb). We first set up our infrastructure so that we can make use of previously defined functions.
###Code
import bookutils
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.InformationFlow import ```and then make use of the following features.This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string.For tracking information on security properties, use `tstr` as follows:```python>>> thello = tstr('hello', taint='LOW')```Now, any operation from `thello` that results in a string fragment would include the correct taint. For example:```python>>> thello[1:2].taint'LOW'```For tracking the originating indexes from the input string, use `ostr` as follows:```python>>> ohw = ostr("hello\tworld", origin=100)```The originating indexes can be recovered as follows:```python>>> (ohw[0:4] +"-"+ ohw[6:]).origin[100, 101, 102, 103, -1, 106, 107, 108, 109, 110]``` A Vulnerable DatabaseSay we want to implement an *in-memory database* service in Python. Here is a rather flimsy attempt. We use the following dataset.
###Code
INVENTORY = """\
1997,van,Ford,E350
2000,car,Mercury,Cougar
1999,car,Chevy,Venture\
"""
VEHICLES = INVENTORY.split('\n')
###Output
_____no_output_____
###Markdown
Our DB is a Python class that parses its arguments and throws `SQLException` which is defined below.
###Code
class SQLException(Exception):
pass
###Output
_____no_output_____
###Markdown
The database is simply a Python `dict` that is exposed only through SQL queries.
###Code
class DB:
def __init__(self, db={}):
self.db = dict(db)
###Output
_____no_output_____
###Markdown
Representing TablesThe database contains tables, which are created by a method call `create_table()`. Each table data structure is a pair of values. The first one is the meta data containing column names and types. The second value is a list of values in the table.
###Code
class DB(DB):
def create_table(self, table, defs):
self.db[table] = (defs, [])
###Output
_____no_output_____
###Markdown
The table can be retrieved by name using the `table()` method call.
###Code
class DB(DB):
def table(self, t_name):
if t_name in self.db:
return self.db[t_name]
raise SQLException('Table (%s) was not found' % repr(t_name))
###Output
_____no_output_____
###Markdown
Here is an example of how to use both. We fill a table `inventory` with four columns: `year`, `kind`, `company`, and `model`. Initially, our table is empty.
###Code
def sample_db():
db = DB()
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
return db
###Output
_____no_output_____
###Markdown
Using `table()`, we can retrieve the table definition as well as its contents.
###Code
db = sample_db()
db.table('inventory')
###Output
_____no_output_____
###Markdown
We also define `column()` for retrieving the column definition from a table declaration.
###Code
class DB(DB):
def column(self, table_decl, c_name):
if c_name in table_decl:
return table_decl[c_name]
raise SQLException('Column (%s) was not found' % repr(c_name))
db = sample_db()
decl, rows = db.table('inventory')
db.column(decl, 'year')
###Output
_____no_output_____
###Markdown
Executing SQL StatementsThe `sql()` method of `DB` executes SQL statements. It inspects its arguments, and dispatches the query based on the kind of SQL statement to be executed.
###Code
class DB(DB):
def do_select(self, query):
assert False
def do_update(self, query):
assert False
def do_insert(self, query):
assert False
def do_delete(self, query):
assert False
def sql(self, query):
methods = [('select ', self.do_select),
('update ', self.do_update),
('insert into ', self.do_insert),
('delete from', self.do_delete)]
for key, method in methods:
if query.startswith(key):
return method(query[len(key):])
raise SQLException('Unknown SQL (%s)' % query)
###Output
_____no_output_____
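###Markdown
As a quick sanity check of the dispatcher (a sketch, not part of the original chapter): a statement that matches none of the four prefixes is rejected with an `SQLException`.
###Code
db = sample_db()
try:
    db.sql('drop table inventory')   # no matching prefix
except SQLException as e:
    print(e)
###Output
_____no_output_____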
###Markdown
At this point, the individual methods for handling SQL statements are not yet defined. Let us do this in the next steps. Selecting DataThe `do_select()` method handles SQL `select` statements to retrieve data from a table.
###Code
class DB(DB):
def do_select(self, query):
FROM, WHERE = ' from ', ' where '
table_start = query.find(FROM)
if table_start < 0:
raise SQLException('no table specified')
where_start = query.find(WHERE)
select = query[:table_start]
if where_start >= 0:
t_name = query[table_start + len(FROM):where_start]
where = query[where_start + len(WHERE):]
else:
t_name = query[table_start + len(FROM):]
where = ''
_, table = self.table(t_name)
if where:
selected = self.expression_clause(table, "(%s)" % where)
selected_rows = [hm for i, data, hm in selected if data]
else:
selected_rows = table
rows = self.expression_clause(selected_rows, "(%s)" % select)
return [data for i, data, hm in rows]
###Output
_____no_output_____
###Markdown
The `expression_clause()` method is used for two purposes:1. In the form `select` $x$, $y$, $z$ `from` $t$, it _evaluates_ (and returns) the expressions $x$, $y$, $z$ in the contexts of the selected rows.2. If a clause `where` $p$ is given, it also evaluates $p$ in the context of the rows and includes the rows in the selection only if $p$ holds.To evaluate expressions like $x$, $y$, $z$ or $p$, we make use of the Python evaluation function.
###Code
class DB(DB):
def expression_clause(self, table, statement):
selected = []
for i, hm in enumerate(table):
selected.append((i, self.my_eval(statement, {}, hm), hm))
return selected
###Output
_____no_output_____
###Markdown
Internally, it calls `my_eval()` to evaluate any given statement.
###Code
class DB(DB):
def my_eval(self, statement, g, l):
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid WHERE (%s)' % repr(statement))
###Output
_____no_output_____
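###Markdown
To illustrate how `expression_clause()` evaluates an expression in the context of each row, here is a small sketch (not part of the original chapter). Since `insert` is not implemented at this point, we append a row to the table by hand:
###Code
db = sample_db()
decls, rows = db.table('inventory')
# add one row directly, bypassing the not-yet-defined insert statement
rows.append({'year': 1997, 'kind': 'van', 'company': 'Ford', 'model': 'E350'})
db.expression_clause(rows, "(year < 2000)")
###Output
_____no_output_____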
###Markdown
**Note:** Using `eval()` here introduces some important security issues, which we will discuss later in this chapter. Here's how we can use `sql()` to issue a query. Note that the table is still empty.
###Code
db = sample_db()
db.sql('select year from inventory')
db = sample_db()
db.sql('select year from inventory where year == 2018')
###Output
_____no_output_____
###Markdown
Inserting DataThe `do_insert()` method handles SQL `insert` statements.
###Code
class DB(DB):
def do_insert(self, query):
VALUES = ' values '
table_end = query.find('(')
t_name = query[:table_end].strip()
names_end = query.find(')')
decls, table = self.table(t_name)
names = [i.strip() for i in query[table_end + 1:names_end].split(',')]
# verify columns exist
for k in names:
self.column(decls, k)
values_start = query.find(VALUES)
if values_start < 0:
raise SQLException('Invalid INSERT (%s)' % repr(query))
values = [
i.strip() for i in query[values_start + len(VALUES) + 1:-1].split(',')
]
if len(names) != len(values):
raise SQLException(
'names(%s) != values(%s)' % (repr(names), repr(values)))
        # dict lookups happen in C code, so we can't use that
kvs = {}
for k,v in zip(names, values):
for key,kval in decls.items():
if k == key:
kvs[key] = self.convert(kval, v)
table.append(kvs)
###Output
_____no_output_____
###Markdown
In SQL, a column can come in any supported data type. To ensure it is stored using the type originally declared, we need the ability to convert the values to specific types which is provided by `convert()`.
###Code
import ast
class DB(DB):
def convert(self, cast, value):
try:
return cast(ast.literal_eval(value))
except:
raise SQLException('Invalid Conversion %s(%s)' % (cast, value))
###Output
_____no_output_____
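###Markdown
A quick look at `convert()` on its own (a sketch, not part of the original chapter): values are parsed with `ast.literal_eval()`, so only literals are accepted, and anything else is rejected.
###Code
db = sample_db()
assert db.convert(int, '1997') == 1997
assert db.convert(str, "'van'") == 'van'
try:
    db.convert(int, 'year + 1')  # not a literal, hence rejected
except SQLException as e:
    print(e)
###Output
_____no_output_____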
###Markdown
Here is an example of how to use the SQL `insert` command:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.table('inventory')
###Output
_____no_output_____
###Markdown
With the database filled, we can also run more complex queries:
###Code
db.sql('select year + 1, kind from inventory')
db.sql('select year, kind from inventory where year == 1997')
###Output
_____no_output_____
###Markdown
Updating DataSimilarly, `do_update()` handles SQL `update` statements.
###Code
class DB(DB):
def do_update(self, query):
SET, WHERE = ' set ', ' where '
table_end = query.find(SET)
if table_end < 0:
raise SQLException('Invalid UPDATE (%s)' % repr(query))
set_end = table_end + 5
t_name = query[:table_end]
decls, table = self.table(t_name)
names_end = query.find(WHERE)
if names_end >= 0:
names = query[set_end:names_end]
where = query[names_end + len(WHERE):]
else:
names = query[set_end:]
where = ''
sets = [[i.strip() for i in name.split('=')]
for name in names.split(',')]
# verify columns exist
for k, v in sets:
self.column(decls, k)
if where:
selected = self.expression_clause(table, "(%s)" % where)
updated = [hm for i, d, hm in selected if d]
else:
updated = table
for hm in updated:
for k, v in sets:
                # we cannot do dict lookups because they are implemented in C.
for key, kval in decls.items():
if key == k:
hm[key] = self.convert(kval, v)
return "%d records were updated" % len(updated)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can update things:
###Code
db.sql('update inventory set year = 1998 where year == 1997')
db.sql('select year from inventory')
db.table('inventory')
###Output
_____no_output_____
###Markdown
Deleting DataFinally, SQL `delete` statements are handled by `do_delete()`.
###Code
class DB(DB):
def do_delete(self, query):
WHERE = ' where '
table_end = query.find(WHERE)
if table_end < 0:
raise SQLException('Invalid DELETE (%s)' % query)
t_name = query[:table_end].strip()
_, table = self.table(t_name)
where = query[table_end + len(WHERE):]
selected = self.expression_clause(table, "%s" % where)
deleted = [i for i, d, hm in selected if d]
for i in sorted(deleted, reverse=True):
del table[i]
return "%d records were deleted" % len(deleted)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can delete data:
###Code
db.sql('delete from inventory where company == "Ford"')
###Output
_____no_output_____
###Markdown
Our database is now empty:
###Code
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
All Methods TogetherHere is how our database can be used.
###Code
db = DB()
###Output
_____no_output_____
###Markdown
Again, we first create a table in our database with the correct data types.
###Code
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
###Output
_____no_output_____
###Markdown
Here is a simple convenience function to update the table using our dataset.
###Code
def update_inventory(sqldb, vehicle):
inventory_def = sqldb.db['inventory'][0]
k, v = zip(*inventory_def.items())
val = [repr(cast(val)) for cast, val in zip(v, vehicle.split(','))]
sqldb.sql('insert into inventory (%s) values (%s)' % (','.join(k),
','.join(val)))
for V in VEHICLES:
update_inventory(db, V)
###Output
_____no_output_____
###Markdown
Our database now contains the same dataset as `VEHICLES` under `INVENTORY` table.
###Code
db.db
###Output
_____no_output_____
###Markdown
Here is a sample select statement.
###Code
db.sql('select year,kind from inventory')
db.sql("select company,model from inventory where kind == 'car'")
###Output
_____no_output_____
###Markdown
We can run updates on it.
###Code
db.sql("update inventory set year = 1998, company = 'Suzuki' where kind == 'van'")
db.db
###Output
_____no_output_____
###Markdown
It can even do mathematics on the fly!
###Code
db.sql('select int(year)+10 from inventory')
###Output
_____no_output_____
###Markdown
Adding a new row to our table.
###Code
db.sql("insert into inventory (year, kind, company, model) values (1, 'charriot', 'Rome', 'Quadriga')")
db.db
###Output
_____no_output_____
###Markdown
Which we then delete.
###Code
db.sql("delete from inventory where year < 1900")
###Output
_____no_output_____
###Markdown
Fuzzing SQLTo verify that everything is OK, let us fuzz. First we define our grammar.
###Code
import string
EXPR_GRAMMAR = {
"<start>": ["<expr>"],
"<expr>": ["<bexpr>", "<aexpr>", "(<expr>)", "<term>"],
"<bexpr>": [
"<aexpr><lt><aexpr>",
"<aexpr><gt><aexpr>",
"<expr>==<expr>",
"<expr>!=<expr>",
],
"<aexpr>": [
"<aexpr>+<aexpr>", "<aexpr>-<aexpr>", "<aexpr>*<aexpr>",
"<aexpr>/<aexpr>", "<word>(<exprs>)", "<expr>"
],
"<exprs>": ["<expr>,<exprs>", "<expr>"],
"<lt>": ["<"],
"<gt>": [">"],
"<term>": ["<number>", "<word>"],
"<number>": ["<integer>.<integer>", "<integer>", "-<number>"],
"<integer>": ["<digit><integer>", "<digit>"],
"<word>": ["<word><letter>", "<word><digit>", "<letter>"],
"<digit>":
list(string.digits),
"<letter>":
list(string.ascii_letters + '_:.')
}
INVENTORY_GRAMMAR = dict(
EXPR_GRAMMAR, **{
'<start>': ['<query>'],
'<query>': [
'select <exprs> from <table>',
'select <exprs> from <table> where <bexpr>',
'insert into <table> (<names>) values (<literals>)',
'update <table> set <assignments> where <bexpr>',
'delete from <table> where <bexpr>',
],
'<table>': ['<word>'],
'<names>': ['<column>,<names>', '<column>'],
'<column>': ['<word>'],
'<literals>': ['<literal>', '<literal>,<literals>'],
'<literal>': ['<number>', "'<chars>'"],
'<assignments>': ['<kvp>,<assignments>', '<kvp>'],
'<kvp>': ['<column>=<value>'],
'<value>': ['<word>'],
'<chars>': ['<char>', '<char><chars>'],
'<char>':
[i for i in string.printable if i not in "<>'\"\t\n\r\x0b\x0c\x00"
] + ['<lt>', '<gt>'],
})
###Output
_____no_output_____
###Markdown
As can be seen from the source of our database, the functions always check whether the table name is correct. Hence, we modify the grammar to choose our particular table so that it will have a better chance of reaching deeper. We will see in the later sections how this can be done automatically.
###Code
INVENTORY_GRAMMAR_F = dict(INVENTORY_GRAMMAR, **{'<table>': ['inventory']})
from GrammarFuzzer import GrammarFuzzer
import traceback
gf = GrammarFuzzer(INVENTORY_GRAMMAR_F)
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = db.sql(query)
print(repr(res))
except SQLException as e:
print("> ", e)
pass
except:
traceback.print_exc()
break
print()
###Output
'select O6fo,-977091.1,-36.46 from inventory'
> Invalid WHERE ('(O6fo,-977091.1,-36.46)')
'select g3 from inventory where -3.0!=V/g/b+Q*M*G'
> Invalid WHERE ('(-3.0!=V/g/b+Q*M*G)')
'update inventory set z=a,x=F_,Q=K where p(M)<_*S'
> Column ('z') was not found
'update inventory set R=L5pk where e*l*y-u>K+U(:)'
> Column ('R') was not found
'select _/d*Q+H/d(k)<t+M-A+P from inventory'
> Invalid WHERE ('(_/d*Q+H/d(k)<t+M-A+P)')
'select F5 from inventory'
> Invalid WHERE ('(F5)')
'update inventory set jWh.=a6 where wcY(M)>IB7(i)'
> Column ('jWh.') was not found
'update inventory set U=y where L(W<c,(U!=W))<V(((q)==m<F),O,l)'
> Column ('U') was not found
'delete from inventory where M/b-O*h*E<H-W>e(Y)-P'
> Invalid WHERE ('M/b-O*h*E<H-W>e(Y)-P')
'select ((kP(86)+b*S+J/Z/U+i(U))) from inventory'
> Invalid WHERE ('(((kP(86)+b*S+J/Z/U+i(U))))')
###Markdown
Fuzzing does not seem to have triggered any crashes. However, are crashes the only errors that we should be worried about? The Evil of EvalIn our implementation, we have made use of `eval()` to evaluate expressions using the Python interpreter. This allows us to unleash the full power of Python expressions within our SQL statements.
###Code
db.sql('select year from inventory where year < 2000')
###Output
_____no_output_____
###Markdown
In the above query, the clause `year < 2000` is evaluated using `expression_clause()` using Python in the context of each row; hence, `year < 2000` evaluates to either `True` or `False`. The same holds for the expressions being `select`ed:
###Code
db.sql('select year - 1900 if year < 2000 else year - 2000 from inventory')
###Output
_____no_output_____
###Markdown
This works because `year - 1900 if year < 2000 else year - 2000` is a valid Python expression. (It is not a valid SQL expression, though.) The problem with the above is that there is _no limitation_ to what the Python expression can do. What if the user tries the following?
###Code
db.sql('select __import__("os").popen("pwd").read() from inventory')
###Output
_____no_output_____
###Markdown
The above statement effectively reads from the users' file system. Instead of `os.popen("pwd").read()`, it could execute arbitrary Python commands – to access data, install software, run a background process. This is where "the full power of Python expressions" turns back on us. What we want is to allow our _program_ to make full use of its power; yet, the _user_ (or any third party) should not be entrusted to do the same. Hence, we need to differentiate between (trusted) _input from the program_ and (untrusted) _input from the user_. One method that allows such differentiation is that of *dynamic taint analysis*. The idea is to identify the functions that accept user input as *sources* that *taint* any string that comes in through them, and those functions that perform dangerous operations as *sinks*. Finally we bless certain functions as *taint sanitizers*. The idea is that an input from the source should never reach the sink without undergoing sanitization first. This allows us to use a stronger oracle than simply checking for crashes. Tracking String TaintsThere are various levels of taint tracking that one can perform. The simplest is to track that a string fragment originated in a specific environment, and has not undergone a taint removal process. For this, we simply need to wrap the original string with an environment identifier (the _taint_) with `tstr`, and produce `tstr` instances on each operation that results in another string fragment. The attribute `taint` holds a label identifying the environment this instance was derived. A Class for Tainted StringsFor capturing information flows we need a new string class. The idea is to use the new tainted string class `tstr` as a wrapper on the original `str` class. However, `str` is an *immutable* class. Hence, it does not call its `__init__()` method after being constructed. This means that any subclasses of `str` also will not get the `__init__()` method called. If we want to get our initialization routine called, we need to [hook into `__new__()`](https://docs.python.org/3/reference/datamodel.htmlbasic-customization) and return an instance of our own class. We combine this with our initialization code in `__init__()`.
###Code
class tstr(str):
def __new__(cls, value, *args, **kw):
return str.__new__(cls, value)
def __init__(self, value, taint=None, **kwargs):
self.taint = taint
class tstr(tstr):
def __repr__(self):
return tstr(str.__repr__(self), taint=self.taint)
class tstr(tstr):
def __str__(self):
return str.__str__(self)
###Output
_____no_output_____
###Markdown
For example, if we wrap `"hello"` in `tstr`, then we should be able to access its taint:
###Code
thello = tstr('hello', taint='LOW')
thello.taint
repr(thello).taint
###Output
_____no_output_____
###Markdown
By default, when we wrap a string, it is tainted. Hence we also need a way to clear the taint in the string. One way is to simply return a `str` instance as above. However, one may sometimes wish to remove the taint from an existing instance. This is accomplished with `clear_taint()`. During `clear_taint()`, we simply set the taint to `None`. This method comes with a companion method `has_taint()`, which checks whether a `tstr` instance is currently tainted.
###Code
class tstr(tstr):
def clear_taint(self):
self.taint = None
return self
def has_taint(self):
return self.taint is not None
###Output
_____no_output_____
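###Markdown
A quick usage example (not part of the original chapter):
###Code
tbye = tstr('bye', taint='LOW')
assert tbye.has_taint()
tbye.clear_taint()
assert not tbye.has_taint()
###Output
_____no_output_____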
###Markdown
String OperatorsTo propagate the taint, we have to extend string functions, such as operators. We can do so in one single big step, overloading all string methods and operators. When we create a new string from an existing tainted string, we propagate its taint.
###Code
class tstr(tstr):
def create(self, s):
return tstr(s, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_str_wrapper()` function creates a wrapper around an existing string method which attaches the taint to the result of the method:
###Code
def make_str_wrapper(fun):
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
return self.create(res)
return proxy
###Output
_____no_output_____
###Markdown
We do this for all string methods that return a string:
###Code
def informationflow_init_1():
for name in ['__format__', '__mod__', '__rmod__', '__getitem__', '__add__', '__mul__', '__rmul__',
'capitalize', 'casefold', 'center', 'encode',
'expandtabs', 'format', 'format_map', 'join', 'ljust', 'lower', 'lstrip', 'replace',
'rjust', 'rstrip', 'strip', 'swapcase', 'title', 'translate', 'upper']:
fun = getattr(str, name)
setattr(tstr, name, make_str_wrapper(fun))
informationflow_init_1()
INITIALIZER_LIST = [informationflow_init_1]
def initialize():
for fn in INITIALIZER_LIST:
fn()
###Output
_____no_output_____
###Markdown
The one missing operator is `+` with a regular string on the left side and a tainted string on the right side. Python supports a `__radd__()` method which is invoked if the associated object is used on the right side of an addition.
###Code
class tstr(tstr):
def __radd__(self, s):
return self.create(s + str(self))
###Output
_____no_output_____
###Markdown
With this, we are already done. Let us create a string `thello` with a taint `LOW`.
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
Now, any substring will also be tainted:
###Code
thello[0].taint
thello[1:3].taint
###Output
_____no_output_____
###Markdown
String additions will return a `tstr` object with the taint:
###Code
(tstr('foo', taint='HIGH') + 'bar').taint
###Output
_____no_output_____
###Markdown
Our `__radd__()` method ensures this also works if the `tstr` occurs on the right side of a string addition:
###Code
('foo' + tstr('bar', taint='HIGH')).taint
thello += ', world'
thello.taint
###Output
_____no_output_____
###Markdown
Other operators such as multiplication also work:
###Code
(thello * 5).taint
('hw %s' % thello).taint
(tstr('hello %s', taint='HIGH') % 'world').taint
import string
###Output
_____no_output_____
###Markdown
Tracking Untrusted InputSo, what can one do with tainted strings? We reconsider the `DB` example. We define a "better" `TrustedDB` which only accepts strings tainted as `"TRUSTED"`.
###Code
class TrustedDB(DB):
def sql(self, s):
assert isinstance(s, tstr), "Need a tainted string"
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
return super().sql(s)
###Output
_____no_output_____
###Markdown
Feeding a string with an "unknown" (i.e., non-existing) trust level will cause `TrustedDB` to fail:
###Code
bdb = TrustedDB(db.db)
from ExpectError import ExpectError
with ExpectError():
bdb.sql("select year from INVENTORY")
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_18538/3935989889.py", line 2, in <module>
bdb.sql("select year from INVENTORY")
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_18538/995123203.py", line 3, in sql
assert isinstance(s, tstr), "Need a tainted string"
AssertionError: Need a tainted string (expected)
###Markdown
Additionally, any user input would originally be tagged with `"UNTRUSTED"` as taint. If we place an untrusted string into our `TrustedDB`, it will also fail:
###Code
bad_user_input = tstr('__import__("os").popen("ls").read()', taint='UNTRUSTED')
with ExpectError():
bdb.sql(bad_user_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_18538/3307042773.py", line 3, in <module>
bdb.sql(bad_user_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_18538/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
Hence, somewhere along the computation, we have to turn the "untrusted" inputs into "trusted" strings. This process is called *sanitization*. A simple sanitization function for our purposes could ensure that the input is a plain `select ... from ...` query over a small set of allowed characters (in particular, no quotes or semicolons); if this is the case, then the input gets a new `"TRUSTED"` taint. If not, we turn the string into an (untrusted) empty string; other alternatives would be to raise an error or to escape or delete "untrusted" characters.
###Code
import re
def sanitize(user_input):
assert isinstance(user_input, tstr)
if re.match(
r'^select +[-a-zA-Z0-9_, ()]+ from +[-a-zA-Z0-9_, ()]+$', user_input):
return tstr(user_input, taint='TRUSTED')
else:
return tstr('', taint='UNTRUSTED')
good_user_input = tstr("select year,model from inventory", taint='UNTRUSTED')
sanitized_input = sanitize(good_user_input)
sanitized_input
sanitized_input.taint
bdb.sql(sanitized_input)
###Output
_____no_output_____
###Markdown
Let us now try out our untrusted input:
###Code
sanitized_input = sanitize(bad_user_input)
sanitized_input
sanitized_input.taint
with ExpectError():
bdb.sql(sanitized_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_18538/249000876.py", line 2, in <module>
bdb.sql(sanitized_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_18538/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
In a similar fashion, we can prevent SQL and code injections discussed in [the chapter on Web fuzzing](WebFuzzer.ipynb). Taint Aware FuzzingWe can also use tainting to _direct fuzzing to those grammar rules that are likely to generate dangerous inputs._ The idea here is to identify inputs generated by our fuzzer that lead to untrusted execution. First we define the exception to be thrown when a tainted value reaches a dangerous operation.
###Code
class Tainted(Exception):
def __init__(self, v):
self.v = v
def __str__(self):
return 'Tainted[%s]' % self.v
###Output
_____no_output_____
###Markdown
TaintedDBNext, since `my_eval()` is the most dangerous operation in the `DB` class, we define a new class `TaintedDB` that overrides the `my_eval()` to throw an exception whenever an untrusted string reaches this part.
###Code
class TaintedDB(DB):
def my_eval(self, statement, g, l):
if statement.taint != 'TRUSTED':
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
We initialize an instance of `TaintedDB`
###Code
tdb = TaintedDB()
tdb.db = db.db
###Output
_____no_output_____
###Markdown
Then we start fuzzing.
###Code
import traceback
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = tdb.sql(tstr(query, taint='UNTRUSTED'))
print(repr(res))
except SQLException as e:
pass
except Tainted as e:
print("> ", e)
except:
traceback.print_exc()
break
print()
###Output
'delete from inventory where y/u-l+f/y<Y(c)/A-H*q'
> Tainted[y/u-l+f/y<Y(c)/A-H*q]
"insert into inventory (G,Wmp,sl3hku3) values ('<','?')"
"insert into inventory (d0) values (',_G')"
'select P*Q-w/x from inventory where X<j==:==j*r-f'
> Tainted[(X<j==:==j*r-f)]
'select a>F*i from inventory where Q/I-_+P*j>.'
> Tainted[(Q/I-_+P*j>.)]
'select (V-i<T/g) from inventory where T/r/G<FK(m)/(i)'
> Tainted[(T/r/G<FK(m)/(i))]
'select (((i))),_(S,_)/L-k<H(Sv,R,n,W,Y) from inventory'
> Tainted[((((i))),_(S,_)/L-k<H(Sv,R,n,W,Y))]
'select (N==c*U/P/y),i-e/n*y,T!=w,u from inventory'
> Tainted[((N==c*U/P/y),i-e/n*y,T!=w,u)]
'update inventory set _=B,n=v where o-p*k-J>T'
'select s from inventory where w4g4<.m(_)/_>t'
> Tainted[(w4g4<.m(_)/_>t)]
###Markdown
One can see that `insert`, `update`, `select` and `delete` statements on an existing table lead to taint exceptions. We can now focus on these specific kinds of inputs. However, this is not the only thing we can do. We will see how we can identify specific portions of input that reached tainted execution using character origins in the later sections. But before that, we explore other uses of taints. Preventing Privacy LeaksUsing taints, we can also ensure that secret information does not leak out. We can assign a special taint `"SECRET"` to strings whose information must not leak out:
###Code
secrets = tstr('<Plenty of secret keys>', taint='SECRET')
###Output
_____no_output_____
###Markdown
Accessing any substring of `secrets` will propagate the taint:
###Code
secrets[1:3].taint
###Output
_____no_output_____
###Markdown
Consider the _heartbeat_ security leak from [the chapter on Fuzzing](Fuzzer.ipynb), in which a server would accidentally reply not only the user input sent to it, but also secret memory. If the reply consists only of the user input, there is no taint associated with it:
###Code
user_input = "hello"
reply = user_input
isinstance(reply, tstr)
###Output
_____no_output_____
###Markdown
If, however, the reply contains _any_ part of the secret, the reply will be tainted:
###Code
reply = user_input + secrets[0:5]
reply
reply.taint
###Output
_____no_output_____
###Markdown
The output function of our server would now ensure that the data sent back does not contain any secret information:
###Code
def send_back(s):
assert not isinstance(s, tstr) and not s.taint == 'SECRET'
...
with ExpectError():
send_back(reply)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_18538/3747050841.py", line 2, in <module>
send_back(reply)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_18538/1061364539.py", line 2, in send_back
assert not isinstance(s, tstr) and not s.taint == 'SECRET'
AssertionError (expected)
###Markdown
Tracking Character OriginsOur `tstr` solution can help to identify information leaks – but it is by no means complete. If we actually take the `heartbeat()` implementation from [the chapter on Fuzzing](Fuzzer.ipynb), we will see that _any_ reply is marked as `SECRET` – even those not even accessing secret memory:
###Code
from Fuzzer import heartbeat
reply = heartbeat('hello', 5, memory=secrets)
reply.taint
###Output
_____no_output_____
###Markdown
Why is this? If we look into the implementation of `heartbeat()`, we will see that it first builds a long string `memory` from the (non-secret) reply and the (secret) memory, before returning the first characters from `memory`.```python Store reply in memory memory = reply + memory[len(reply):]```At this point, the whole memory still is tainted as `SECRET`, _including_ the non-secret part from `reply`. We may be able to circumvent the issue by tagging the `reply` as `PUBLIC` – but then, this taint would be in conflict with the `SECRET` tag of `memory`. What happens if we compose a string from two differently tainted strings?
###Code
thilo = tstr("High", taint='HIGH') + tstr("Low", taint='LOW')
###Output
_____no_output_____
###Markdown
It turns out that in this case, the `__add__()` method takes precedence over the `__radd__()` method, which means that the right-hand `"Low"` string is treated as a regular (non-tainted) string.
###Code
thilo
thilo.taint
###Output
_____no_output_____
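###Markdown
One possible remedy (a sketch, not the chapter's solution) is to resolve conflicts during concatenation by picking the "stronger" of the two taints according to some precedence order; the ordering and the subclass name below are purely hypothetical. As the following discussion points out, though, the right resolution is application-dependent.
###Code
PRECEDENCE = ['LOW', 'HIGH', 'SECRET']  # hypothetical ordering, weakest first
def stronger_taint(t1, t2):
    # pick the taint with the higher rank; unknown taints rank lowest
    def rank(t):
        return PRECEDENCE.index(t) if t in PRECEDENCE else -1
    return t1 if rank(t1) >= rank(t2) else t2
class ctstr(tstr):  # hypothetical subclass, not part of the chapter
    def __add__(self, other):
        return ctstr(str.__add__(self, other),
                     taint=stronger_taint(self.taint,
                                          getattr(other, 'taint', None)))
(ctstr("High", taint='HIGH') + ctstr("Low", taint='LOW')).taint
###Output
_____no_output_____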
###Markdown
We could set up the `__add__()` and other methods with special handling for conflicting taints. However, the way this conflict should be resolved would be highly _application-dependent_:* If we use taints to indicate _privacy levels_, `SECRET` privacy should take precedence over `PUBLIC` privacy. Any combination of a `SECRET`-tainted string and a `PUBLIC`-tainted string thus should have a `SECRET` taint.* If we use taints to indicate _origins_ of information, an `UNTRUSTED` origin should take precedence over a `TRUSTED` origin. Any combination of an `UNTRUSTED`-tainted string and a `TRUSTED`-tainted string thus should have an `UNTRUSTED` taint.Of course, such conflict resolutions can be implemented. But even so, they will not help us in the `heartbeat()` example differentiating secret from non-secret output data. Tracking Individual CharactersFortunately, there is a better, more generic way to solve the above problems. The key to composition of differently tainted strings is to assign taints not only to strings, but actually to every bit of information – in our case, characters. If every character has a taint on its own, a new composition of characters will simply inherit this very taint _per character_. To this end, we introduce a second bit of information named _origin_. Distinguishing various untrusted sources may be accomplished by origining each instance as separate instance (called *colors* in dynamic origin research). You will see an instance of this technique in the chapter on [Grammar Mining](GrammarMiner.ipynb). In this section, we carry *character level* origins. That is, given a fragment that resulted from a portion of the original origined string, one will be able to tell which portion of the input string the fragment was taken from. In essence, each input character index from an origined source gets its own color. More complex origining such as *bitmap origins* are possible where a single character may result from multiple origined character indexes (such as *checksum* operations on strings). We do not consider these in this chapter. A Class for Tracking Character OriginsLet us introduce a class `ostr` which, like `tstr`, carries a taint for each string, and additionally an _origin_ for each character that indicates its source. It is a consecutive number in a particular range (by default, starting with zero) indicating its _position_ within a specific origin.
###Code
class ostr(str):
DEFAULT_ORIGIN = 0
def __new__(cls, value, *args, **kw):
return str.__new__(cls, value)
def __init__(self, value, taint=None, origin=None, **kwargs):
self.taint = taint
if origin is None:
origin = ostr.DEFAULT_ORIGIN
if isinstance(origin, int):
self.origin = list(range(origin, origin + len(self)))
else:
self.origin = origin
assert len(self.origin) == len(self)
class ostr(ostr):
def create(self, s):
return ostr(s, taint=self.taint, origin=self.origin)
class ostr(ostr):
UNKNOWN_ORIGIN = -1
def __repr__(self):
# handle escaped chars
origin = [ostr.UNKNOWN_ORIGIN]
for s, o in zip(str(self), self.origin):
origin.extend([o] * (len(repr(s)) - 2))
origin.append(ostr.UNKNOWN_ORIGIN)
return ostr(str.__repr__(self), taint=self.taint, origin=origin)
class ostr(ostr):
def __str__(self):
return str.__str__(self)
###Output
_____no_output_____
###Markdown
By default, character origins start with `0`:
###Code
thello = ostr('hello')
assert thello.origin == [0, 1, 2, 3, 4]
###Output
_____no_output_____
###Markdown
We can also specify the starting origin as below -- `6..10`
###Code
tworld = ostr('world', origin=6)
assert tworld.origin == [6, 7, 8, 9, 10]
a = ostr("hello\tworld")
repr(a).origin
###Output
_____no_output_____
###Markdown
`str()` returns a `str` instance without origin or taint information:
###Code
assert type(str(thello)) == str
###Output
_____no_output_____
###Markdown
`repr()`, however, keeps the origin information for the original string:
###Code
repr(thello)
repr(thello).origin
###Output
_____no_output_____
###Markdown
Just as with taints, we can clear origins and check whether an origin is present:
###Code
class ostr(ostr):
def clear_taint(self):
self.taint = None
return self
def has_taint(self):
return self.taint is not None
class ostr(ostr):
def clear_origin(self):
self.origin = [self.UNKNOWN_ORIGIN] * len(self)
return self
def has_origin(self):
return any(origin != self.UNKNOWN_ORIGIN for origin in self.origin)
thello = ostr('Hello')
assert thello.has_origin()
thello.clear_origin()
assert not thello.has_origin()
###Output
_____no_output_____
###Markdown
In the remainder of this section, we re-implement various string methods such that they also keep track of origins. If this is too tedious for you, jump right [to the next section](Checking-Origins) which gives a number of usage examples. CreateWe need to create new substrings that are wrapped in `ostr` objects. However, we also want to allow our subclasses to create their own instances. Hence we again provide a `create()` method that produces a new `ostr` instance.
###Code
class ostr(ostr):
def create(self, res, origin=None):
return ostr(res, taint=self.taint, origin=origin)
thello = ostr('hello', taint='HIGH')
tworld = thello.create('world', origin=6)
tworld.origin
tworld.taint
assert (thello.origin, tworld.origin) == (
[0, 1, 2, 3, 4], [6, 7, 8, 9, 10])
###Output
_____no_output_____
###Markdown
IndexIn Python, indexing is provided through `__getitem__()`. Indexing on positive integers is simple enough. However, it has two additional wrinkles. The first is that, if the index is negative, it is counted from the end of the string; that is, the last character has index `-1`.
###Code
class ostr(ostr):
def __getitem__(self, key):
res = super().__getitem__(key)
if isinstance(key, int):
key = len(self) + key if key < 0 else key
return self.create(res, [self.origin[key]])
elif isinstance(key, slice):
return self.create(res, self.origin[key])
else:
assert False
hello = ostr('hello', taint='HIGH')
assert (hello[0], hello[-1]) == ('h', 'o')
hello[0].taint
###Output
_____no_output_____
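###Markdown
To make the negative-index case concrete (a quick check, not part of the original chapter): even with a non-zero starting origin, the last character maps back to its original input position.
###Code
hello100 = ostr('hello', taint='HIGH', origin=100)
assert hello100[-1] == 'o'
assert hello100[-1].origin == [104]
###Output
_____no_output_____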
###Markdown
The other wrinkle is that `__getitem__()` can accept a slice. We discuss this next. SlicesThe Python `slice` operator `[n:m]` relies on the object being an `iterator`. Hence, we define the `__iter__()` method, which returns a custom `iterator`.
###Code
class ostr(ostr):
def __iter__(self):
return ostr_iterator(self)
###Output
_____no_output_____
###Markdown
The `__iter__()` method requires a supporting `iterator` object. The `iterator` is used to save the state of the current iteration, which it does by keeping a reference to the original `ostr`, and the current index of iteration `_str_idx`.
###Code
class ostr_iterator():
def __init__(self, ostr):
self._ostr = ostr
self._str_idx = 0
def __next__(self):
if self._str_idx == len(self._ostr):
raise StopIteration
# calls ostr getitem should be ostr
c = self._ostr[self._str_idx]
assert isinstance(c, ostr)
self._str_idx += 1
return c
###Output
_____no_output_____
###Markdown
Bringing all these together:
###Code
thw = ostr('hello world', taint='HIGH')
thw[0:5]
assert thw[0:5].has_taint()
assert thw[0:5].has_origin()
thw[0:5].taint
thw[0:5].origin
###Output
_____no_output_____
###Markdown
Splits
###Code
def make_split_wrapper(fun):
def proxy(self, *args, **kwargs):
lst = fun(self, *args, **kwargs)
return [self.create(elem) for elem in lst]
return proxy
for name in ['split', 'rsplit', 'splitlines']:
fun = getattr(str, name)
setattr(ostr, name, make_split_wrapper(fun))
thello = ostr('hello world', taint='LOW')
thello == 'hello world'
thello.split()[0].taint
###Output
_____no_output_____
###Markdown
(Exercise for the reader: handle _partitions_, i.e., splitting a string by substrings) ConcatenationIf two origined strings are concatenated together, it may be desirable to transfer the origins from each to the corresponding portion of the resulting string. The concatenation of strings is accomplished by overriding `__add__()`.
###Code
class ostr(ostr):
def __add__(self, other):
if isinstance(other, ostr):
return self.create(str.__add__(self, other),
(self.origin + other.origin))
else:
return self.create(str.__add__(self, other),
(self.origin + [self.UNKNOWN_ORIGIN for i in other]))
###Output
_____no_output_____
###Markdown
Testing concatenations between two `ostr` instances:
###Code
thello = ostr("hello")
tworld = ostr("world", origin=6)
thw = thello + tworld
assert thw.origin == [0, 1, 2, 3, 4, 6, 7, 8, 9, 10]
###Output
_____no_output_____
###Markdown
What if a `ostr` is concatenated with a `str`?
###Code
space = " "
th_w = thello + space + tworld
assert th_w.origin == [
0,
1,
2,
3,
4,
ostr.UNKNOWN_ORIGIN,
ostr.UNKNOWN_ORIGIN,
6,
7,
8,
9,
10]
###Output
_____no_output_____
###Markdown
One wrinkle here is that when adding an `ostr` and a `str`, the user may place the `str` first, in which case the `__add__()` method will be called on the `str` instance, not on the `ostr` instance. However, Python provides a solution: if one defines `__radd__()` on the `ostr` instance, that method will be called rather than `str.__add__()`.
###Code
class ostr(ostr):
def __radd__(self, other):
origin = other.origin if isinstance(other, ostr) else [
self.UNKNOWN_ORIGIN for i in other]
return self.create(str.__add__(other, self), (origin + self.origin))
###Output
_____no_output_____
###Markdown
We test it out:
###Code
shello = "hello"
tworld = ostr("world")
thw = shello + tworld
assert thw.origin == [ostr.UNKNOWN_ORIGIN] * len(shello) + [0, 1, 2, 3, 4]
###Output
_____no_output_____
###Markdown
These methods (slicing and concatenation) are sufficient to implement other string methods that result in a string and do not change the characters themselves (i.e., no case change). Hence, we look at a helper method next. Extract Origin StringGiven a specific input index, the method `x()` extracts the corresponding origined portion from an `ostr`. As a convenience, it supports `slices` along with `ints`.
###Code
class ostr(ostr):
class TaintException(Exception):
pass
def x(self, i=0):
if not self.origin:
            raise self.TaintException('Invalid request idx')
if isinstance(i, int):
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j == i]]
elif isinstance(i, slice):
r = range(i.start or 0, i.stop or len(self), i.step or 1)
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j in r]]
thw = ostr('hello world', origin=100)
assert thw.x(101) == ['e']
assert thw.x(slice(101, 105)) == ['e', 'l', 'l', 'o']
###Output
_____no_output_____
###Markdown
Replace The `replace()` method replaces a portion of the string with another.
###Code
class ostr(ostr):
def replace(self, a, b, n=None):
old_origin = self.origin
b_origin = b.origin if isinstance(
b, ostr) else [self.UNKNOWN_ORIGIN] * len(b)
mystr = str(self)
i = 0
while True:
if n and i >= n:
break
idx = mystr.find(a)
if idx == -1:
break
last = idx + len(a)
mystr = mystr.replace(a, b, 1)
partA, partB = old_origin[0:idx], old_origin[last:]
old_origin = partA + b_origin + partB
i += 1
return self.create(mystr, old_origin)
my_str = ostr("aa cde aa")
res = my_str.replace('aa', 'bb')
assert (res, res.origin) == ('bb cde bb',
                             [ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN,
                              2, 3, 4, 5, 6,
                              ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN])
my_str = ostr("aa cde aa")
res = my_str.replace('aa', ostr('bb', origin=100))
assert (
res, res.origin) == (
('bb cde bb'), [
100, 101, 2, 3, 4, 5, 6, 100, 101])
###Output
_____no_output_____
###Markdown
Split We essentially have to re-implement the split operations; splitting by whitespace is slightly different from splitting by an explicit separator.
###Code
class ostr(ostr):
def _split_helper(self, sep, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = len(sep)
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
first_idx = last_idx + sep_len
return result_list
def _split_space(self, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = 0
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
v = str(self[last_idx:])
sep_len = len(v) - len(v.lstrip(' '))
first_idx = last_idx + sep_len
return result_list
def rsplit(self, sep=None, maxsplit=-1):
splitted = super().rsplit(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
def split(self, sep=None, maxsplit=-1):
splitted = super().split(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
my_str = ostr('ab cdef ghij kl')
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([0, 1], [3, 4, 5, 6], [8, 9, 10, 11], [13, 14])
my_str = ostr('ab cdef ghij kl', origin=list(range(0, 15)))
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert(ab.origin, cdef.origin, kl.origin) == ([0, 1], [3, 4, 5, 6], [13, 14])
my_str = ostr('ab   cdef ghij    kl', origin=100, taint='HIGH')
ab, cdef, ghij, kl = my_str.rsplit()
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([100, 101], [105, 106, 107, 108], [110, 111, 112, 113],
[118, 119])
my_str = ostr('ab   cdef ghij    kl', origin=list(range(0, 20)), taint='HIGH')
ab, cdef, ghij, kl = my_str.split()
assert (ab.origin, cdef.origin, kl.origin) == ([0, 1], [5, 6, 7, 8], [18, 19])
assert ab.taint == 'HIGH'
###Output
_____no_output_____
###Markdown
Strip
###Code
class ostr(ostr):
def strip(self, cl=None):
return self.lstrip(cl).rstrip(cl)
def lstrip(self, cl=None):
res = super().lstrip(cl)
i = self.find(res)
return self[i:]
def rstrip(self, cl=None):
res = super().rstrip(cl)
return self[0:len(res)]
my_str1 = ostr(" abc ")
v = my_str1.strip()
assert v, v.origin == ('abc', [2, 3, 4])
my_str1 = ostr(" abc ")
v = my_str1.lstrip()
assert (v, v.origin) == ('abc ', [2, 3, 4, 5, 6])
my_str1 = ostr(" abc ")
v = my_str1.rstrip()
assert (v, v.origin) == (' abc', [0, 1, 2, 3, 4])
###Output
_____no_output_____
###Markdown
Expand Tabs
###Code
class ostr(ostr):
def expandtabs(self, n=8):
parts = self.split('\t')
res = super().expandtabs(n)
all_parts = []
for i, p in enumerate(parts):
all_parts.extend(p.origin)
if i < len(parts) - 1:
                l = n - len(all_parts) % n  # pad up to the next tab stop
all_parts.extend([p.origin[-1]] * l)
return self.create(res, all_parts)
my_str = str("ab\tcd")
my_ostr = ostr("ab\tcd")
v1 = my_str.expandtabs(4)
v2 = my_ostr.expandtabs(4)
assert str(v1) == str(v2)
assert (len(v1), repr(v2), v2.origin) == (6, "'ab  cd'", [0, 1, 1, 1, 3, 4])
class ostr(ostr):
def join(self, iterable):
mystr = ''
myorigin = []
sep_origin = self.origin
lst = list(iterable)
for i, s in enumerate(lst):
sorigin = s.origin if isinstance(s, ostr) else [
self.UNKNOWN_ORIGIN] * len(s)
myorigin.extend(sorigin)
mystr += str(s)
if i < len(lst) - 1:
myorigin.extend(sep_origin)
mystr += str(self)
res = super().join(iterable)
assert len(res) == len(mystr)
return self.create(res, myorigin)
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104])
v4 = ostr('').join([v2, v3, v1])
assert (
v4, v4.origin) == (
'cdefab', [
103, 104, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 100, 101])
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104])
v4 = ostr(',').join([v2, v3, v1])
assert (v4, v4.origin) == ('cd,ef,ab',
[103, 104, 0, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 0, 100, 101])
###Output
_____no_output_____
###Markdown
Partitions
###Code
class ostr(ostr):
def partition(self, sep):
partA, sep, partB = super().partition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
def rpartition(self, sep):
partA, sep, partB = super().rpartition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
###Output
_____no_output_____
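###Markdown
Neither `partition()` nor `rpartition()` comes with a check here; a minimal sketch of the expected behavior, assuming the `ostr` definitions above, could look as follows.
###Code
# Sketch: partition() distributes the origins over the three parts
phello = ostr('hello world', origin=100)
a, sep, b = phello.partition(' ')
assert (str(a), str(sep), str(b)) == ('hello', ' ', 'world')
assert (a.origin, sep.origin, b.origin) == (
    [100, 101, 102, 103, 104], [105], [106, 107, 108, 109, 110])
###Output
_____no_output_____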
###Markdown
Justify
###Code
class ostr(ostr):
    def ljust(self, width, fillchar=' '):
        res = super().ljust(width, fillchar)
        initial = len(res) - len(self)
        if isinstance(fillchar, tstr):
            t = fillchar.x()
        else:
            t = self.UNKNOWN_ORIGIN
        # ljust() pads on the right, so the padding origins come last
        return self.create(res, self.origin + [t] * initial)
class ostr(ostr):
    def rjust(self, width, fillchar=' '):
        res = super().rjust(width, fillchar)
        final = len(res) - len(self)
        if isinstance(fillchar, tstr):
            t = fillchar.x()
        else:
            t = self.UNKNOWN_ORIGIN
        # rjust() pads on the left, so the padding origins come first
        return self.create(res, [t] * final + self.origin)
###Output
_____no_output_____
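###Markdown
The justification methods also lack a check; here is a minimal sketch of the expected behavior, assuming the definitions above and padding characters without a known origin.
###Code
# Sketch: padding characters get UNKNOWN_ORIGIN on the padded side
jhi = ostr('hi', origin=100)
v = jhi.rjust(4)
assert (str(v), v.origin) == ('  hi',
                              [ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 100, 101])
v = jhi.ljust(4)
assert (str(v), v.origin) == ('hi  ',
                              [100, 101, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN])
###Output
_____no_output_____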
###Markdown
mod
###Code
class ostr(ostr):
def __mod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
s_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = self.find('%s')
assert i >= 0
res = super().__mod__(s)
r_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
class ostr(ostr):
def __rmod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
r_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = s.find('%s')
assert i >= 0
res = super().__rmod__(s)
s_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
a = ostr('hello %s world', origin=100)
a
(a % 'good').origin
b = 'hello %s world'
c = ostr('bad', origin=10)
(b % c).origin
###Output
_____no_output_____
###Markdown
String methods that do not change origin
###Code
class ostr(ostr):
def swapcase(self):
return self.create(str(self).swapcase(), self.origin)
def upper(self):
return self.create(str(self).upper(), self.origin)
def lower(self):
return self.create(str(self).lower(), self.origin)
def capitalize(self):
return self.create(str(self).capitalize(), self.origin)
def title(self):
return self.create(str(self).title(), self.origin)
a = ostr('aa', origin=100).upper()
a, a.origin
###Output
_____no_output_____
###Markdown
General wrappers These are not strictly needed for operation, but can be useful for tracing.
###Code
def make_str_wrapper(fun):
def proxy(*args, **kwargs):
res = fun(*args, **kwargs)
return res
return proxy
import inspect
import types
def informationflow_init_2():
ostr_members = [name for name, fn in inspect.getmembers(ostr, callable)
if isinstance(fn, types.FunctionType) and fn.__qualname__.startswith('ostr')]
for name, fn in inspect.getmembers(str, callable):
if name not in set(['__class__', '__new__', '__str__', '__init__',
'__repr__', '__getattribute__']) | set(ostr_members):
setattr(ostr, name, make_str_wrapper(fn))
informationflow_init_2()
INITIALIZER_LIST.append(informationflow_init_2)
###Output
_____no_output_____
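###Markdown
If tracing is desired, one could extend the proxy to log each delegated call. The following is a sketch only; the `make_tracing_str_wrapper()` helper is hypothetical and not used in the rest of this chapter.
###Code
def make_tracing_str_wrapper(fun):
    def proxy(*args, **kwargs):
        res = fun(*args, **kwargs)
        # Log which str method was delegated, and what it returned
        print(fun.__name__, "->", repr(res))
        return res
    return proxy
###Output
_____no_output_____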
###Markdown
Methods yet to be translated These methods generate strings from other strings. However, we do not have the right implementations for any of these. Hence these are marked as dangerous until we can generate the right translations.
###Code
def make_str_abort_wrapper(fun):
def proxy(*args, **kwargs):
raise ostr.TaintException(
'%s Not implemented in `ostr`' %
fun.__name__)
return proxy
def informationflow_init_3():
for name, fn in inspect.getmembers(str, callable):
# Omitted 'splitlines' as this is needed for formatting output in
# IPython/Jupyter
if name in ['__format__', 'format_map', 'format',
'__mul__', '__rmul__', 'center', 'zfill', 'decode', 'encode']:
setattr(ostr, name, make_str_abort_wrapper(fn))
informationflow_init_3()
INITIALIZER_LIST.append(informationflow_init_3)
###Output
_____no_output_____
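###Markdown
As a quick check (assuming `ExpectError` as used elsewhere in this chapter), invoking one of these unsupported methods on an `ostr` now raises a `TaintException`:
###Code
with ExpectError():
    ostr('hello').center(10)
###Output
_____no_output_____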
###Markdown
While generating proxy wrappers for string operations can handle most common cases of information flow transmission, some operations involving strings cannot be overridden this way; we will return to such limitations later in this chapter. Checking OriginsWith all this implemented, we now have full-fledged `ostr` strings where we can easily check the origin of each and every character. To check whether a string originates from another string, we can convert the origin to a set and resort to standard set operations:
###Code
s = ostr("hello", origin=100)
s[1]
s[1].origin
set(s[1].origin) <= set(s.origin)
t = ostr("world", origin=200)
set(s.origin) <= set(t.origin)
u = s + t + "!"
u.origin
ostr.UNKNOWN_ORIGIN in u.origin
###Output
_____no_output_____
###Markdown
Privacy Leaks RevisitedLet us now apply origin tracking to see whether we can come up with a satisfactory solution for checking the `heartbeat()` function against information leakage.
###Code
SECRET_ORIGIN = 1000
###Output
_____no_output_____
###Markdown
We define a "secret" that must not leak out:
###Code
secret = ostr('<again, some super-secret input>', origin=SECRET_ORIGIN)
###Output
_____no_output_____
###Markdown
Each and every character in `secret` has an origin starting with `SECRET_ORIGIN`:
###Code
print(secret.origin)
###Output
[1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031]
###Markdown
If we now invoke `heartbeat()` with a given string, the origins of the reply should all be `UNKNOWN_ORIGIN` (coming from the input), and none of the characters should have a `SECRET_ORIGIN`.
###Code
s = heartbeat('hello', 5, memory=secret)
s
print(s.origin)
###Output
[-1, -1, -1, -1, -1]
###Markdown
We can verify that the secret did not leak out by formulating appropriate assertions:
###Code
assert s.origin == [ostr.UNKNOWN_ORIGIN] * len(s)
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in s.origin)
assert not any(origin >= SECRET_ORIGIN for origin in s.origin)
###Output
_____no_output_____
###Markdown
All assertions pass, again confirming that no secret leaked out. Let us now go and exploit `heartbeat()` to reveal its secrets. As `heartbeat()` is unchanged, it is as vulnerable as it was:
###Code
s = heartbeat('hello', 32, memory=secret)
s
###Output
_____no_output_____
###Markdown
Now, however, the reply _does_ contain secret information:
###Code
print(s.origin)
with ExpectError():
assert s.origin == [ostr.UNKNOWN_ORIGIN] * len(s)
with ExpectError():
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in s.origin)
with ExpectError():
assert not any(origin >= SECRET_ORIGIN for origin in s.origin)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_18538/4073057292.py", line 2, in <module>
assert not any(origin >= SECRET_ORIGIN for origin in s.origin)
AssertionError (expected)
###Markdown
We can now integrate these assertions into the `heartbeat()` function, causing it to fail before leaking information. Additionally (or alternatively), we can also rewrite our output functions not to give out any secret information. We leave these two exercises to the reader. Taint-Directed FuzzingThe previous _Taint-Aware Fuzzing_ was a bit unsatisfactory in that we could not focus on the specific parts of the grammar that led to dangerous operations. We fix that with _taint-directed fuzzing_ using `TrackingDB`. The idea here is to track the origins of each character that reaches `eval()`, trace them back to the grammar nodes that generated them, and increase the probability of using those nodes again. TrackingDBThe `TrackingDB` is similar to `TaintedDB`. The difference is that if execution reaches `my_eval()` with an origined statement, we simply raise a `Tainted` exception.
###Code
class TrackingDB(TaintedDB):
def my_eval(self, statement, g, l):
if statement.origin:
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
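###Markdown
As a quick sanity check (a sketch, assuming `TaintedDB` can be constructed like `DB`, and `Tainted` and `ExpectError` as defined earlier), any origined statement reaching `my_eval()` now raises `Tainted`:
###Code
with ExpectError():
    TrackingDB({}).my_eval(ostr('2 + 2', origin=0), {}, {})
###Output
_____no_output_____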
###Markdown
Next, we need a specially crafted fuzzer that preserves the taints. TaintedGrammarFuzzerWe define a `TaintedGrammarFuzzer` class that ensures that the taints propagate to the derivation tree. This is similar to the `GrammarFuzzer` from the [chapter on grammar fuzzers](GrammarFuzzer.ipynb) except that the origins and taints are preserved.
###Code
import random
from Grammars import START_SYMBOL
from GrammarFuzzer import GrammarFuzzer
from Parser import canonical
class TaintedGrammarFuzzer(GrammarFuzzer):
def __init__(self,
grammar,
start_symbol=START_SYMBOL,
expansion_switch=1,
log=False):
self.tainted_start_symbol = ostr(
start_symbol, origin=[1] * len(start_symbol))
self.expansion_switch = expansion_switch
self.log = log
self.grammar = grammar
self.c_grammar = canonical(grammar)
self.init_tainted_grammar()
def expansion_cost(self, expansion, seen=set()):
symbols = [e for e in expansion if e in self.c_grammar]
if len(symbols) == 0:
return 1
if any(s in seen for s in symbols):
return float('inf')
return sum(self.symbol_cost(s, seen) for s in symbols) + 1
def fuzz_tree(self):
tree = (self.tainted_start_symbol, [])
nt_leaves = [tree]
expansion_trials = 0
while nt_leaves:
idx = random.randint(0, len(nt_leaves) - 1)
key, children = nt_leaves[idx]
expansions = self.ct_grammar[key]
if expansion_trials < self.expansion_switch:
expansion = random.choice(expansions)
else:
costs = [self.expansion_cost(e) for e in expansions]
m = min(costs)
all_min = [i for i, c in enumerate(costs) if c == m]
expansion = expansions[random.choice(all_min)]
new_leaves = [(token, []) for token in expansion]
new_nt_leaves = [e for e in new_leaves if e[0] in self.ct_grammar]
children[:] = new_leaves
nt_leaves[idx:idx + 1] = new_nt_leaves
if self.log:
print("%-40s" % (key + " -> " + str(expansion)))
expansion_trials += 1
return tree
def fuzz(self):
self.derivation_tree = self.fuzz_tree()
return self.tree_to_string(self.derivation_tree)
###Output
_____no_output_____
###Markdown
We use a specially prepared tainted grammar for fuzzing. We mark each individual definition, each individual rule, and each individual token with a separate origin (using increments of 1000, 100, and 10, respectively, chosen after inspecting the grammar). This allows us to track exactly which parts of the grammar were involved in the operations we are interested in.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def init_tainted_grammar(self):
key_increment, alt_increment, token_increment = 1000, 100, 10
key_origin = key_increment
self.ct_grammar = {}
for key, val in self.c_grammar.items():
key_origin += key_increment
os = []
for v in val:
ts = []
key_origin += alt_increment
for t in v:
nt = ostr(t, origin=key_origin)
key_origin += token_increment
ts.append(nt)
os.append(ts)
self.ct_grammar[key] = os
# a use tracking grammar
self.ctp_grammar = {}
for key, val in self.ct_grammar.items():
self.ctp_grammar[key] = [(v, dict(use=0)) for v in val]
###Output
_____no_output_____
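###Markdown
To see what the tainted grammar looks like, we can peek at a single definition (a sketch, assuming `INVENTORY_GRAMMAR_F` as defined earlier); each token carries its own origin range:
###Code
tgf_demo = TaintedGrammarFuzzer(INVENTORY_GRAMMAR_F)
[(str(token), token.origin) for token in tgf_demo.ct_grammar['<table>'][0]]
###Output
_____no_output_____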
###Markdown
As before, we initialize the `TrackingDB`
###Code
trdb = TrackingDB(db.db)
###Output
_____no_output_____
###Markdown
Finally, we need to ensure that the taints are preserved when the tree is converted back to a string. For this, we override `tree_to_string()`:
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def tree_to_string(self, tree):
symbol, children, *_ = tree
e = ostr('')
if children:
return e.join([self.tree_to_string(c) for c in children])
else:
return e if symbol in self.c_grammar else symbol
###Output
_____no_output_____
###Markdown
We define `update_grammar()`, which takes the set of origins that reached dangerous operations together with the derivation tree of the string used for fuzzing, and updates the enhanced grammar accordingly.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def update_grammar(self, origin, dtree):
def update_tree(dtree, origin):
key, children = dtree
if children:
updated_children = [update_tree(c, origin) for c in children]
corigin = set.union(
*[o for (key, children, o) in updated_children])
corigin = corigin.union(set(key.origin))
return (key, children, corigin)
else:
my_origin = set(key.origin).intersection(origin)
return (key, [], my_origin)
key, children, oset = update_tree(dtree, set(origin))
for key, alts in self.ctp_grammar.items():
for alt, o in alts:
alt_origins = set([i for token in alt for i in token.origin])
if alt_origins.intersection(oset):
o['use'] += 1
###Output
_____no_output_____
###Markdown
With these, we are now ready to fuzz.
###Code
def tree_type(tree):
key, children = tree
return (type(key), key, [tree_type(c) for c in children])
tgf = TaintedGrammarFuzzer(INVENTORY_GRAMMAR_F)
x = None
for _ in range(10):
qtree = tgf.fuzz_tree()
query = tgf.tree_to_string(qtree)
assert isinstance(query, ostr)
try:
print(repr(query))
res = trdb.sql(query)
print(repr(res))
except SQLException as e:
print(e)
except Tainted as e:
print(e)
origin = e.args[0].origin
tgf.update_grammar(origin, qtree)
except:
traceback.print_exc()
break
print()
###Output
'select (g!=(9)!=((:)==2==9)!=J)==-7 from inventory'
Tainted[((g!=(9)!=((:)==2==9)!=J)==-7)]
'delete from inventory where ((c)==T)!=5==(8!=Y)!=-5'
Tainted[((c)==T)!=5==(8!=Y)!=-5]
'select (((w==(((X!=------8)))))) from inventory'
Tainted[((((w==(((X!=------8)))))))]
'delete from inventory where ((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))'
Tainted[((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))]
'delete from inventory where ((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))'
Tainted[((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))]
'delete from inventory where o!=2==---5==3!=t'
Tainted[o!=2==---5==3!=t]
'select (2) from inventory'
Tainted[((2))]
'select _ from inventory'
Tainted[(_)]
'select L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)) from inventory'
Tainted[(L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)))]
'delete from inventory where _==(7==(9)!=(---5)==1)==-8'
Tainted[_==(7==(9)!=(---5)==1)==-8]
###Markdown
We can now inspect our enhanced grammar to see how many times each rule was used.
###Code
tgf.ctp_grammar
###Output
_____no_output_____
###Markdown
From here, the idea is to focus on the rules that reached dangerous operations more often, and to increase the probability of producing values of that kind. The Limits of Taint TrackingWhile our framework can detect information leakage, it is by no means perfect. There are several ways in which taints can get lost, and information thus may still leak out. ConversionsWe only track taints and origins through _strings_ and _characters_. If we convert these to numbers (or other data), the information is lost. As an example, consider this function, converting individual characters to numbers and back:
###Code
def strip_all_info(s):
t = ""
for c in s:
t += chr(ord(c))
return t
thello = ostr("Secret")
thello
thello.origin
###Output
_____no_output_____
###Markdown
The taints and origins will not propagate through the number conversion:
###Code
thello_stripped = strip_all_info(thello)
thello_stripped
with ExpectError():
thello_stripped.origin
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_18538/588526133.py", line 2, in <module>
thello_stripped.origin
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
This issue could be addressed by extending numbers with taints and origins, just as we did for strings. At some point, however, this will still break down, because as soon as an internal C function in the Python library is reached, the taint will not propagate into and across the C function. (Unless one starts implementing dynamic taints for these, that is.) Internal C libraries As we mentioned before, calls to _internal_ C libraries do not propagate taints. For example, while the following preserves the taints,
###Code
hello = ostr('hello', origin=100)
world = ostr('world', origin=200)
(hello + ' ' + world).origin
###Output
_____no_output_____
###Markdown
a call to a `join` that should be equivalent will fail.
###Code
with ExpectError():
''.join([hello, ' ', world]).origin
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_18538/741210424.py", line 2, in <module>
''.join([hello, ' ', world]).origin
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
Implicit Information FlowEven if one could taint all data in a program, there still would be means to break information flow – notably by turning explicit flow into _implicit_ flow, or data flow into _control flow_. Here is an example:
###Code
def strip_all_info_again(s):
t = ""
for c in s:
if c == 'a':
t += 'a'
elif c == 'b':
t += 'b'
elif c == 'c':
t += 'c'
...
###Output
_____no_output_____
###Markdown
With such a function, there is no explicit data flow between the characters in `s` and the characters in `t`; yet, the strings would be identical. This problem frequently occurs in programs that process and manipulate external input. Enforcing TaintingBoth conversions and implicit information flow are among several ways in which taint and origin information can get lost. To address the problem, the best solution is to _always assume the worst from untainted strings_:* When it comes to trust, an untainted string should be treated as _possibly untrusted_, and hence not relied upon unless sanitized.* When it comes to privacy, an untainted string should be treated as _possibly secret_, and hence not leaked out.As a consequence, your program should always have two kinds of taints: one for explicitly trusted (or secret) and one for explicitly untrusted (or non-secret) data. If a taint gets lost along the way, you may have to restore it from its sources – not unlike the string methods discussed above. The benefit is a trusted application, in which each and every information flow can be checked at runtime, with violations quickly discovered through automated tests. Synopsis This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on the originating indexes of the input string. For tracking information on security properties, use `tstr` as follows:
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
Now, any operation from `thello` that results in a string fragment would include the correct taint. For example:
###Code
thello[1:2].taint
###Output
_____no_output_____
###Markdown
For tracking the originating indexes from the input string, use `ostr` as follows:
###Code
ohw = ostr("hello\tworld", origin=100)
###Output
_____no_output_____
###Markdown
The originating indexes can be recovered as follows:
###Code
(ohw[0:4] +"-"+ ohw[6:]).origin
###Output
_____no_output_____
###Markdown
Lessons Learned* String-based and character-based taints allow us to dynamically track the information flow from input to the internals of a system and back to the output.* Checking taints allows us to discover untrusted inputs and information leakage at runtime.* Data conversions and implicit data flow may strip taint information; the resulting untainted strings should be treated as having the worst possible taint.* Taints can be used in conjunction with fuzzing to provide a more robust indication of incorrect behavior than simply relying on program crashes. Next StepsAn even better alternative to our taint-directed fuzzing is to make use of _symbolic_ techniques that take the semantics of the program under test into account. The chapter on [flow fuzzing](FlowFuzzer.ipynb) introduces these symbolic techniques for the purpose of exploring information flows; the subsequent chapter on [symbolic fuzzing](SymbolicFuzzer.ipynb) then shows how to make full-fledged use of symbolic execution for covering code. Similarly, [search based fuzzing](SearchBasedFuzzer.ipynb) can often provide a cheaper exploration strategy. BackgroundTaint analysis for Python using a library approach, as implemented in this chapter, is discussed by Conti et al. \cite{Conti2010}. Exercises Exercise 1: Tainted NumbersIntroduce a class `tint` (for tainted integer) that, like `tstr`, has a taint attribute that gets passed on from `tint` to `tint`. Part 1: CreationImplement the `tint` class such that taints are set:```pythonx = tint(42, taint='SECRET')assert x.taint == 'SECRET'``` **Solution.** This is pretty straightforward, as we can apply the same scheme as for `tstr`:
###Code
class tint(int):
def __new__(cls, value, *args, **kw):
return int.__new__(cls, value)
def __init__(self, value, taint=None, **kwargs):
self.taint = taint
x = tint(42, taint='SECRET')
assert x.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 2: Arithmetic expressionsEnsure that taints get passed along arithmetic expressions; support addition, subtraction, multiplication, and division operators.```pythony = x + 1assert y.taint == 'SECRET'``` **Solution.** As with `tstr`, we implement a `create()` method and a convenience function to quickly define all arithmetic operations:
###Code
class tint(tint):
def create(self, n):
# print("New tint from", n)
return tint(n, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_int_wrapper()` function creates a wrapper around an existing `int` method which attaches the taint to the result of the method:
###Code
def make_int_wrapper(fun):
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
# print(fun, args, kwargs, "=", repr(res))
return self.create(res)
return proxy
###Output
_____no_output_____
###Markdown
We do this for all arithmetic operators:
###Code
for name in ['__add__', '__radd__', '__mul__', '__rmul__', '__sub__',
'__floordiv__', '__truediv__']:
fun = getattr(int, name)
setattr(tint, name, make_int_wrapper(fun))
x = tint(42, taint='SECRET')
y = x + 1
y.taint
###Output
_____no_output_____
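###Markdown
The other wrapped operators behave the same way; a quick check (sketch):
###Code
assert (x - 1).taint == 'SECRET'
assert (x * 3).taint == 'SECRET'
assert (x // 2).taint == 'SECRET'
###Output
_____no_output_____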
###Markdown
Part 3: Passing taints from integers to stringsConverting a tainted integer into a string (using `repr()`) should yield a tainted string:```pythons = repr(x)assert s.taint == 'SECRET'``` **Solution.** We define the string conversion functions such that they return a tainted string (`tstr`):
###Code
class tint(tint):
def __repr__(self):
s = int.__repr__(self)
return tstr(s, taint=self.taint)
class tint(tint):
def __str__(self):
return tstr(int.__str__(self), taint=self.taint)
x = tint(42, taint='SECRET')
s = repr(x)
assert s.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 4: Passing taints from strings to integersConverting a tainted object (with a `taint` attribute) to an integer should pass that taint:```pythonpassword = tstr('1234', taint='NOT_EXACTLY_SECRET')x = tint(password)assert x == 1234assert x.taint == 'NOT_EXACTLY_SECRET'``` **Solution.** This can be done by having the `__init__()` constructor check for a `taint` attribute:
###Code
class tint(tint):
def __init__(self, value, taint=None, **kwargs):
if taint is not None:
self.taint = taint
else:
self.taint = getattr(value, 'taint', None)
password = tstr('1234', taint='NOT_EXACTLY_SECRET')
x = tint(password)
assert x == 1234
assert x.taint == 'NOT_EXACTLY_SECRET'
###Output
_____no_output_____
###Markdown
Tracking Information FlowWe have explored how one could generate better inputs that can penetrate deeper into the program in question. While doing so, we have relied on program crashes to tell us that we have succeeded in finding problems in the program. However, that is rather simplistic. What if the behavior of the program is simply incorrect, but does not lead to a crash? Can one do better?In this chapter, we explore in depth how to track information flows in Python, and how these flows can be used to determine whether a program behaved as expected.
###Code
from bookutils import YouTubeVideo
YouTubeVideo('MJ0VGzVbhYc')
###Output
_____no_output_____
###Markdown
**Prerequisites*** You should have read the [chapter on coverage](Coverage.ipynb).* You should have read the [chapter on probabilistic fuzzing](ProbabilisticGrammarFuzzer.ipynb). We first set up our infrastructure so that we can make use of previously defined functions.
###Code
import bookutils
from typing import List, Any, Optional, Union
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.InformationFlow import ```and then make use of the following features.This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string. Tracking String Taints`tstr` objects are replacements for Python strings that allow one to track and check _taints_ – that is, information on from where a string originated. For instance, one can mark strings that originate from third-party input with a taint of "LOW", meaning that they have a low security level. The taint is passed in the constructor of a `tstr` object:```python>>> thello = tstr('hello', taint='LOW')```A `tstr` object is fully compatible with original Python strings. For instance, we can index it and access substrings:```python>>> thello[:4]'hell'```However, the `tstr` object also stores the taint, which can be accessed using the `taint` attribute:```python>>> thello.taint'LOW'```The neat thing about taints is that they propagate to all strings derived from the original tainted string.Indeed, any operation from a `tstr` string that results in a string fragment produces another `tstr` object that includes the original taint. For example:```python>>> thello[1:2].taint # type: ignore'LOW'````tstr` objects duplicate most `str` methods, as indicated in the class diagram: Tracking Character Origins`ostr` objects extend `tstr` objects by not only tracking a taint, but also the originating _indexes_ from the input string. This allows you to exactly track where individual characters came from. Assume you have a long string, which at index 100 contains the password `"joshua1234"`. Then you can save this origin information using an `ostr` as follows:```python>>> secret = ostr("joshua1234", origin=100, taint='SECRET')```The `origin` attribute of an `ostr` provides access to a list of indexes:```python>>> secret.origin[100, 101, 102, 103, 104, 105, 106, 107, 108, 109]>>> secret.taint'SECRET'````ostr` objects are compatible with Python strings, except that string operations return `ostr` objects (together with the saved origin and index information). An index of `-1` indicates that the corresponding character has no origin as supplied to the `ostr()` constructor:```python>>> secret_substr = (secret[0:4] + "-" + secret[6:])>>> secret_substr.taint'SECRET'>>> secret_substr.origin[100, 101, 102, 103, -1, 106, 107, 108, 109]````ostr` objects duplicate most `str` methods, as indicated in the class diagram: A Vulnerable DatabaseSay we want to implement an *in-memory database* service in Python. Here is a rather flimsy attempt. We use the following dataset.
###Code
INVENTORY = """\
1997,van,Ford,E350
2000,car,Mercury,Cougar
1999,car,Chevy,Venture\
"""
VEHICLES = INVENTORY.split('\n')
###Output
_____no_output_____
###Markdown
Our `DB` is a Python class that parses its arguments and raises an `SQLException`, which is defined below.
###Code
class SQLException(Exception):
pass
###Output
_____no_output_____
###Markdown
The database is simply a Python `dict` that is exposed only through SQL queries.
###Code
class DB:
def __init__(self, db={}):
self.db = dict(db)
###Output
_____no_output_____
###Markdown
Representing TablesThe database contains tables, which are created by the `create_table()` method. Each table data structure is a pair of values. The first one is the metadata containing column names and types. The second one is the list of rows in the table.
###Code
class DB(DB):
def create_table(self, table, defs):
self.db[table] = (defs, [])
###Output
_____no_output_____
###Markdown
A table can be retrieved by name using the `table()` method.
###Code
class DB(DB):
def table(self, t_name):
if t_name in self.db:
return self.db[t_name]
raise SQLException('Table (%s) was not found' % repr(t_name))
###Output
_____no_output_____
###Markdown
Here is an example of how to use both. We fill a table `inventory` with four columns: `year`, `kind`, `company`, and `model`. Initially, our table is empty.
###Code
def sample_db():
db = DB()
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
return db
###Output
_____no_output_____
###Markdown
Using `table()`, we can retrieve the table definition as well as its contents.
###Code
db = sample_db()
db.table('inventory')
###Output
_____no_output_____
###Markdown
We also define `column()` for retrieving the column definition from a table declaration.
###Code
class DB(DB):
def column(self, table_decl, c_name):
if c_name in table_decl:
return table_decl[c_name]
raise SQLException('Column (%s) was not found' % repr(c_name))
db = sample_db()
decl, rows = db.table('inventory')
db.column(decl, 'year')
###Output
_____no_output_____
###Markdown
Executing SQL StatementsThe `sql()` method of `DB` executes SQL statements. It inspects its arguments, and dispatches the query based on the kind of SQL statement to be executed.
###Code
class DB(DB):
def do_select(self, query):
...
def do_update(self, query):
...
def do_insert(self, query):
...
def do_delete(self, query):
...
def sql(self, query):
methods = [('select ', self.do_select),
('update ', self.do_update),
('insert into ', self.do_insert),
('delete from', self.do_delete)]
for key, method in methods:
if query.startswith(key):
return method(query[len(key):])
raise SQLException('Unknown SQL (%s)' % query)
###Output
_____no_output_____
###Markdown
Here's an example of how to use the `DB` class:
###Code
some_db = DB()
some_db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
However, at this point, the individual methods for handling SQL statements are not yet defined. Let us do this in the next steps. Excursion: Implementing SQL Statements Selecting DataThe `do_select()` method handles SQL `select` statements to retrieve data from a table.
###Code
class DB(DB):
def do_select(self, query):
FROM, WHERE = ' from ', ' where '
table_start = query.find(FROM)
if table_start < 0:
raise SQLException('no table specified')
where_start = query.find(WHERE)
select = query[:table_start]
if where_start >= 0:
t_name = query[table_start + len(FROM):where_start]
where = query[where_start + len(WHERE):]
else:
t_name = query[table_start + len(FROM):]
where = ''
_, table = self.table(t_name)
if where:
selected = self.expression_clause(table, "(%s)" % where)
selected_rows = [hm for i, data, hm in selected if data]
else:
selected_rows = table
rows = self.expression_clause(selected_rows, "(%s)" % select)
return [data for i, data, hm in rows]
###Output
_____no_output_____
###Markdown
The `expression_clause()` method is used for two purposes:1. In the form `select` $x$, $y$, $z$ `from` $t$, it _evaluates_ (and returns) the expressions $x$, $y$, $z$ in the contexts of the selected rows.2. If a clause `where` $p$ is given, it also evaluates $p$ in the context of the rows and includes the rows in the selection only if $p$ holds.To evaluate expressions like $x$, $y$, $z$ or $p$, the method `expression_clause()` makes use of the Python `eval()` evaluation function.
###Code
class DB(DB):
def expression_clause(self, table, statement):
selected = []
for i, hm in enumerate(table):
selected.append((i, self.my_eval(statement, {}, hm), hm))
return selected
###Output
_____no_output_____
###Markdown
If `eval()` fails for whatever reason, we raise an exception:
###Code
class DB(DB):
def my_eval(self, statement, g, l):
try:
return eval(statement, g, l)
except Exception:
raise SQLException('Invalid WHERE (%s)' % repr(statement))
###Output
_____no_output_____
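###Markdown
To illustrate just the evaluation step in isolation, here is a small sketch using a hypothetical one-row table:
###Code
demo_db = sample_db()
demo_db.expression_clause([{'year': 1997}], "(year + 1)")
###Output
_____no_output_____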
###Markdown
**Note:** Using `eval()` here introduces some important security issues, which we will discuss later in this chapter. Here's how we can use `sql()` to issue a query. Note that the table is still empty.
###Code
db = sample_db()
db.sql('select year from inventory')
db = sample_db()
db.sql('select year from inventory where year == 2018')
###Output
_____no_output_____
###Markdown
Inserting DataThe `do_insert()` method handles SQL `insert` statements.
###Code
class DB(DB):
def do_insert(self, query):
VALUES = ' values '
table_end = query.find('(')
t_name = query[:table_end].strip()
names_end = query.find(')')
decls, table = self.table(t_name)
names = [i.strip() for i in query[table_end + 1:names_end].split(',')]
# verify columns exist
for k in names:
self.column(decls, k)
values_start = query.find(VALUES)
if values_start < 0:
raise SQLException('Invalid INSERT (%s)' % repr(query))
values = [
i.strip() for i in query[values_start + len(VALUES) + 1:-1].split(',')
]
if len(names) != len(values):
raise SQLException(
'names(%s) != values(%s)' % (repr(names), repr(values)))
# dict lookups happen in C code, so we can't use that
kvs = {}
for k,v in zip(names, values):
for key,kval in decls.items():
if k == key:
kvs[key] = self.convert(kval, v)
table.append(kvs)
###Output
_____no_output_____
###Markdown
In SQL, a column can come in any supported data type. To ensure it is stored using the type originally declared, we need the ability to convert values to specific types; this is provided by `convert()`.
###Code
import ast
class DB(DB):
def convert(self, cast, value):
try:
return cast(ast.literal_eval(value))
except:
raise SQLException('Invalid Conversion %s(%s)' % (cast, value))
###Output
_____no_output_____
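###Markdown
A brief check of `convert()` (a sketch, using the `sample_db()` helper from above):
###Code
demo_db = sample_db()
assert demo_db.convert(int, '1997') == 1997
assert demo_db.convert(str, "'van'") == 'van'
###Output
_____no_output_____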
###Markdown
Here is an example of how to use the SQL `insert` command:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.table('inventory')
###Output
_____no_output_____
###Markdown
With the database filled, we can also run more complex queries:
###Code
db.sql('select year + 1, kind from inventory')
db.sql('select year, kind from inventory where year == 1997')
###Output
_____no_output_____
###Markdown
Updating DataSimilarly, `do_update()` handles SQL `update` statements.
###Code
class DB(DB):
def do_update(self, query):
SET, WHERE = ' set ', ' where '
table_end = query.find(SET)
if table_end < 0:
raise SQLException('Invalid UPDATE (%s)' % repr(query))
set_end = table_end + 5
t_name = query[:table_end]
decls, table = self.table(t_name)
names_end = query.find(WHERE)
if names_end >= 0:
names = query[set_end:names_end]
where = query[names_end + len(WHERE):]
else:
names = query[set_end:]
where = ''
sets = [[i.strip() for i in name.split('=')]
for name in names.split(',')]
# verify columns exist
for k, v in sets:
self.column(decls, k)
if where:
selected = self.expression_clause(table, "(%s)" % where)
updated = [hm for i, d, hm in selected if d]
else:
updated = table
for hm in updated:
for k, v in sets:
# we can not do dict lookups because it is implemented in C.
for key, kval in decls.items():
if key == k:
hm[key] = self.convert(kval, v)
return "%d records were updated" % len(updated)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can update things:
###Code
db.sql('update inventory set year = 1998 where year == 1997')
db.sql('select year from inventory')
db.table('inventory')
###Output
_____no_output_____
###Markdown
Deleting DataFinally, SQL `delete` statements are handled by `do_delete()`.
###Code
class DB(DB):
def do_delete(self, query):
WHERE = ' where '
table_end = query.find(WHERE)
if table_end < 0:
raise SQLException('Invalid DELETE (%s)' % query)
t_name = query[:table_end].strip()
_, table = self.table(t_name)
where = query[table_end + len(WHERE):]
selected = self.expression_clause(table, "%s" % where)
deleted = [i for i, d, hm in selected if d]
for i in sorted(deleted, reverse=True):
del table[i]
return "%d records were deleted" % len(deleted)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can delete data:
###Code
db.sql('delete from inventory where company == "Ford"')
###Output
_____no_output_____
###Markdown
Our database is now empty:
###Code
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
End of Excursion Here is how our database can be used.
###Code
db = DB()
###Output
_____no_output_____
###Markdown
We first create a table in our database with the correct data types.
###Code
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
###Output
_____no_output_____
###Markdown
Here is a simple convenience function to update the table using our dataset.
###Code
def update_inventory(sqldb, vehicle):
inventory_def = sqldb.db['inventory'][0]
k, v = zip(*inventory_def.items())
val = [repr(cast(val)) for cast, val in zip(v, vehicle.split(','))]
sqldb.sql('insert into inventory (%s) values (%s)' % (','.join(k),
','.join(val)))
for V in VEHICLES:
update_inventory(db, V)
###Output
_____no_output_____
###Markdown
Our database now contains the same dataset as `VEHICLES` under the `inventory` table.
###Code
db.db
###Output
_____no_output_____
###Markdown
Here is a sample select statement.
###Code
db.sql('select year,kind from inventory')
db.sql("select company,model from inventory where kind == 'car'")
###Output
_____no_output_____
###Markdown
We can run updates on it.
###Code
db.sql("update inventory set year = 1998, company = 'Suzuki' where kind == 'van'")
db.db
###Output
_____no_output_____
###Markdown
It can even do mathematics on the fly!
###Code
db.sql('select int(year)+10 from inventory')
###Output
_____no_output_____
###Markdown
Adding a new row to our table.
###Code
db.sql("insert into inventory (year, kind, company, model) values (1, 'charriot', 'Rome', 'Quadriga')")
db.db
###Output
_____no_output_____
###Markdown
Which we then delete.
###Code
db.sql("delete from inventory where year < 1900")
###Output
_____no_output_____
###Markdown
Fuzzing SQLTo verify that everything is OK, let us fuzz. First we define our grammar. Excursion: Defining a SQL grammar
###Code
import string
from Grammars import START_SYMBOL, Grammar, Expansion, \
is_valid_grammar, extend_grammar
EXPR_GRAMMAR: Grammar = {
"<start>": ["<expr>"],
"<expr>": ["<bexpr>", "<aexpr>", "(<expr>)", "<term>"],
"<bexpr>": [
"<aexpr><lt><aexpr>",
"<aexpr><gt><aexpr>",
"<expr>==<expr>",
"<expr>!=<expr>",
],
"<aexpr>": [
"<aexpr>+<aexpr>", "<aexpr>-<aexpr>", "<aexpr>*<aexpr>",
"<aexpr>/<aexpr>", "<word>(<exprs>)", "<expr>"
],
"<exprs>": ["<expr>,<exprs>", "<expr>"],
"<lt>": ["<"],
"<gt>": [">"],
"<term>": ["<number>", "<word>"],
"<number>": ["<integer>.<integer>", "<integer>", "-<number>"],
"<integer>": ["<digit><integer>", "<digit>"],
"<word>": ["<word><letter>", "<word><digit>", "<letter>"],
"<digit>":
list(string.digits),
"<letter>":
list(string.ascii_letters + '_:.')
}
assert is_valid_grammar(EXPR_GRAMMAR)
PRINTABLE_CHARS: List[str] = [i for i in string.printable
if i not in "<>'\"\t\n\r\x0b\x0c\x00"] + ['<lt>', '<gt>']
INVENTORY_GRAMMAR = extend_grammar(EXPR_GRAMMAR,
{
'<start>': ['<query>'],
'<query>': [
'select <exprs> from <table>',
'select <exprs> from <table> where <bexpr>',
'insert into <table> (<names>) values (<literals>)',
'update <table> set <assignments> where <bexpr>',
'delete from <table> where <bexpr>',
],
'<table>': ['<word>'],
'<names>': ['<column>,<names>', '<column>'],
'<column>': ['<word>'],
'<literals>': ['<literal>', '<literal>,<literals>'],
'<literal>': ['<number>', "'<chars>'"],
'<assignments>': ['<kvp>,<assignments>', '<kvp>'],
'<kvp>': ['<column>=<value>'],
'<value>': ['<word>'],
'<chars>': ['<char>', '<char><chars>'],
'<char>': PRINTABLE_CHARS, # type: ignore
})
assert is_valid_grammar(INVENTORY_GRAMMAR)
###Output
_____no_output_____
###Markdown
As can be seen from the source of our database, the functions always check whether the table name is correct. Hence, we modify the grammar to choose our particular table so that it will have a better chance of reaching deeper. We will see in the later sections how this can be done automatically.
###Code
INVENTORY_GRAMMAR_F = extend_grammar(INVENTORY_GRAMMAR,
{'<table>': ['inventory']})
###Output
_____no_output_____
###Markdown
End of Excursion
###Code
import traceback
from GrammarFuzzer import GrammarFuzzer
gf = GrammarFuzzer(INVENTORY_GRAMMAR_F)
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = db.sql(query)
print(repr(res))
except SQLException as e:
print("> ", e)
pass
except:
traceback.print_exc()
break
print()
###Output
'select O6fo,-977091.1,-36.46 from inventory'
> Invalid WHERE ('(O6fo,-977091.1,-36.46)')
'select g3 from inventory where -3.0!=V/g/b+Q*M*G'
> Invalid WHERE ('(-3.0!=V/g/b+Q*M*G)')
'update inventory set z=a,x=F_,Q=K where p(M)<_*S'
> Column ('z') was not found
'update inventory set R=L5pk where e*l*y-u>K+U(:)'
> Column ('R') was not found
'select _/d*Q+H/d(k)<t+M-A+P from inventory'
> Invalid WHERE ('(_/d*Q+H/d(k)<t+M-A+P)')
'select F5 from inventory'
> Invalid WHERE ('(F5)')
'update inventory set jWh.=a6 where wcY(M)>IB7(i)'
> Column ('jWh.') was not found
'update inventory set U=y where L(W<c,(U!=W))<V(((q)==m<F),O,l)'
> Column ('U') was not found
'delete from inventory where M/b-O*h*E<H-W>e(Y)-P'
> Invalid WHERE ('M/b-O*h*E<H-W>e(Y)-P')
'select ((kP(86)+b*S+J/Z/U+i(U))) from inventory'
> Invalid WHERE ('(((kP(86)+b*S+J/Z/U+i(U))))')
###Markdown
Fuzzing does not seem to have triggered any crashes. However, are crashes the only errors that we should be worried about? The Evil of EvalIn our database implementation – notably in the `expression_clause()` method – we have made use of `eval()` to evaluate expressions using the Python interpreter. This allows us to unleash the full power of Python expressions within our SQL statements.
###Code
db.sql('select year from inventory where year < 2000')
###Output
_____no_output_____
###Markdown
In the above query, the clause `year < 2000` is evaluated by `expression_clause()` using Python in the context of each row; hence, `year < 2000` evaluates to either `True` or `False`. The same holds for the expressions being `select`ed:
###Code
db.sql('select year - 1900 if year < 2000 else year - 2000 from inventory')
###Output
_____no_output_____
###Markdown
This works because `year - 1900 if year < 2000 else year - 2000` is a valid Python expression. (It is not a valid SQL expression, though.) The problem with the above is that there is _no limitation_ to what the Python expression can do. What if the user tries the following?
###Code
db.sql('select __import__("os").popen("pwd").read() from inventory')
###Output
_____no_output_____
###Markdown
The above statement effectively reads from the user's file system. Instead of `os.popen("pwd").read()`, it could execute arbitrary Python commands – to access data, install software, run a background process. This is where "the full power of Python expressions" turns back on us. What we want is to allow our _program_ to make full use of its power; yet, the _user_ (or any third party) should not be entrusted to do the same. Hence, we need to differentiate between (trusted) _input from the program_ and (untrusted) _input from the user_. One method that allows such differentiation is that of *dynamic taint analysis*. The idea is to identify the functions that accept user input as *sources* that *taint* any string that comes in through them, and those functions that perform dangerous operations as *sinks*. Finally, we bless certain functions as *taint sanitizers*. The idea is that an input from a source should never reach a sink without undergoing sanitization first. This allows us to use a stronger oracle than simply checking for crashes. Tracking String TaintsThere are various levels of taint tracking that one can perform. The simplest is to track that a string fragment originated in a specific environment, and has not undergone a taint removal process. For this, we simply need to wrap the original string with an environment identifier (the _taint_) with `tstr`, and produce `tstr` instances on each operation that results in another string fragment. The attribute `taint` holds a label identifying the environment this instance was derived from. A Class for Tainted StringsFor capturing information flows, we need a new string class. The idea is to use the new tainted string class `tstr` as a wrapper on the original `str` class. However, `str` is an *immutable* class. Hence, it does not call its `__init__()` method after being constructed. This means that any subclasses of `str` also will not get the `__init__()` method called. If we want to get our initialization routine called, we need to [hook into `__new__()`](https://docs.python.org/3/reference/datamodel.htmlbasic-customization) and return an instance of our own class. We combine this with our initialization code in `__init__()`.
###Code
class tstr(str):
"""Wrapper for strings, saving taint information"""
def __new__(cls, value, *args, **kw):
"""Create a tstr() instance. Used internally."""
return str.__new__(cls, value)
def __init__(self, value: Any, taint: Any = None, **kwargs) -> None:
"""Constructor.
`value` is the string value the `tstr` object is to be constructed from.
`taint` is an (optional) taint to be propagated to derived strings."""
self.taint: Any = taint
class tstr(tstr):
def __repr__(self) -> tstr:
"""Return a representation."""
return tstr(str.__repr__(self), taint=self.taint)
class tstr(tstr):
def __str__(self) -> str:
"""Convert to string"""
return str.__str__(self)
###Output
_____no_output_____
###Markdown
For example, if we wrap `"hello"` in `tstr`, then we should be able to access its taint:
###Code
thello: tstr = tstr('hello', taint='LOW')
thello.taint
repr(thello).taint # type: ignore
###Output
_____no_output_____
###Markdown
By default, when we wrap a string, it is tainted. Hence, we also need a way to clear the taint in the string. One way is to simply return a `str` instance as above. However, one may sometimes wish to remove the taint from an existing instance. This is accomplished with `clear_taint()`. During `clear_taint()`, we simply set the taint to `None`. This method comes with a companion method `has_taint()` which checks whether a `tstr` instance is currently tainted.
###Code
class tstr(tstr):
def clear_taint(self):
"""Remove taint"""
self.taint = None
return self
def has_taint(self):
"""Check if taint is present"""
return self.taint is not None
###Output
_____no_output_____
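###Markdown
A quick check of these two methods (a sketch, assuming the `tstr` definitions above):
###Code
tdemo = tstr('hello', taint='LOW')
assert tdemo.has_taint()
tdemo.clear_taint()
assert not tdemo.has_taint()
###Output
_____no_output_____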
###Markdown
String OperatorsTo propagate the taint, we have to extend string functions such as operators. We can do so in a single big step, overloading all string methods and operators. When we create a new string from an existing tainted string, we propagate its taint.
###Code
class tstr(tstr):
def create(self, s):
return tstr(s, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_str_wrapper()` function creates a wrapper around an existing string method which attaches the taint to the result of the method:
###Code
class tstr(tstr):
@staticmethod
def make_str_wrapper(fun):
"""Make `fun` (a `str` method) a method in `tstr`"""
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
return self.create(res)
if hasattr(fun, '__doc__'):
# Copy docstring
proxy.__doc__ = fun.__doc__
return proxy
###Output
_____no_output_____
###Markdown
We do this for all string methods that return a string:
###Code
def informationflow_init_1():
for name in ['__format__', '__mod__', '__rmod__', '__getitem__',
'__add__', '__mul__', '__rmul__',
'capitalize', 'casefold', 'center', 'encode',
'expandtabs', 'format', 'format_map', 'join',
'ljust', 'lower', 'lstrip', 'replace',
'rjust', 'rstrip', 'strip', 'swapcase', 'title', 'translate', 'upper']:
fun = getattr(str, name)
setattr(tstr, name, tstr.make_str_wrapper(fun))
informationflow_init_1()
INITIALIZER_LIST = [informationflow_init_1]
def initialize():
for fn in INITIALIZER_LIST:
fn()
###Output
_____no_output_____
###Markdown
The one missing operator is `+` with a regular string on the left side and a tainted string on the right side. Python supports a `__radd__()` method which is invoked if the associated object is used on the right side of an addition.
###Code
class tstr(tstr):
def __radd__(self, value):
"""Return value + self, as a `tstr` object"""
return self.create(value + str(self))
###Output
_____no_output_____
###Markdown
With this, we are already done. Let us create a string `thello` with a taint `LOW`.
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
Now, any substring will also be tainted:
###Code
thello[0].taint # type: ignore
thello[1:3].taint # type: ignore
###Output
_____no_output_____
###Markdown
String additions will return a `tstr` object with the taint:
###Code
(tstr('foo', taint='HIGH') + 'bar').taint # type: ignore
###Output
_____no_output_____
###Markdown
Our `__radd__()` method ensures this also works if the `tstr` occurs on the right side of a string addition:
###Code
('foo' + tstr('bar', taint='HIGH')).taint # type: ignore
thello += ', world' # type: ignore
thello.taint # type: ignore
###Output
_____no_output_____
###Markdown
Other operators such as multiplication also work:
###Code
(thello * 5).taint # type: ignore
('hw %s' % thello).taint # type: ignore
(tstr('hello %s', taint='HIGH') % 'world').taint # type: ignore
###Output
_____no_output_____
###Markdown
Tracking Untrusted InputSo, what can one do with tainted strings? We reconsider the `DB` example. We define a "better" `TrustedDB` which only accepts strings tainted as `"TRUSTED"`.
###Code
class TrustedDB(DB):
def sql(self, s):
assert isinstance(s, tstr), "Need a tainted string"
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
return super().sql(s)
###Output
_____no_output_____
###Markdown
Feeding a string with an "unknown" (i.e., non-existing) trust level will cause `TrustedDB` to fail:
###Code
bdb = TrustedDB(db.db)
from ExpectError import ExpectError
with ExpectError():
bdb.sql("select year from INVENTORY")
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_15398/3935989889.py", line 2, in <module>
bdb.sql("select year from INVENTORY")
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_15398/995123203.py", line 3, in sql
assert isinstance(s, tstr), "Need a tainted string"
AssertionError: Need a tainted string (expected)
###Markdown
Additionally, any user input would initially be tagged with `"UNTRUSTED"` as its taint. If we place an untrusted string into our better database, it will also fail:
###Code
bad_user_input = tstr('__import__("os").popen("ls").read()', taint='UNTRUSTED')
with ExpectError():
bdb.sql(bad_user_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_15398/3307042773.py", line 3, in <module>
bdb.sql(bad_user_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_15398/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
Hence, somewhere along the computation, we have to turn the "untrusted" inputs into "trusted" strings. This process is called *sanitization*. A simple sanitization function for our purposes could ensure that the input consists only of a small set of allowed characters (not including semicolons or quotes); if this is the case, then the input gets a new `"TRUSTED"` taint. If not, we turn the string into an (untrusted) empty string; other alternatives would be to raise an error or to escape or delete "untrusted" characters.
###Code
import re
def sanitize(user_input):
assert isinstance(user_input, tstr)
if re.match(
r'^select +[-a-zA-Z0-9_, ()]+ from +[-a-zA-Z0-9_, ()]+$', user_input):
return tstr(user_input, taint='TRUSTED')
else:
return tstr('', taint='UNTRUSTED')
good_user_input = tstr("select year,model from inventory", taint='UNTRUSTED')
sanitized_input = sanitize(good_user_input)
sanitized_input
sanitized_input.taint
bdb.sql(sanitized_input)
###Output
_____no_output_____
###Markdown
Let us now try out our untrusted input:
###Code
sanitized_input = sanitize(bad_user_input)
sanitized_input
sanitized_input.taint
with ExpectError():
bdb.sql(sanitized_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_15398/249000876.py", line 2, in <module>
bdb.sql(sanitized_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_15398/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
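###Markdown
The `sanitize()` function above turns rejected inputs into empty untrusted strings. As mentioned, raising an error is another option; here is a minimal sketch of that alternative (our own variant, not part of the chapter's code):
###Code
class SanitizationError(Exception):
    pass

def sanitize_or_raise(user_input):
    """Sketch: reject bad input with an exception instead of
    returning an empty untrusted string"""
    assert isinstance(user_input, tstr)
    if re.match(
            r'^select +[-a-zA-Z0-9_, ()]+ from +[-a-zA-Z0-9_, ()]+$', user_input):
        return tstr(user_input, taint='TRUSTED')
    raise SanitizationError('Rejected input: ' + repr(str(user_input)))

assert sanitize_or_raise(good_user_input).taint == 'TRUSTED'

try:
    sanitize_or_raise(bad_user_input)
    rejected = False
except SanitizationError:
    rejected = True
assert rejected
###Output
_____no_output_____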
###Markdown
In a similar fashion, we can prevent SQL and code injections discussed in [the chapter on Web fuzzing](WebFuzzer.ipynb). Taint Aware FuzzingWe can also use tainting to _direct fuzzing to those grammar rules that are likely to generate dangerous inputs._ The idea here is to identify inputs generated by our fuzzer that lead to untrusted execution. First we define the exception to be thrown when a tainted value reaches a dangerous operation.
###Code
class Tainted(Exception):
def __init__(self, v):
self.v = v
def __str__(self):
return 'Tainted[%s]' % self.v
###Output
_____no_output_____
###Markdown
TaintedDBNext, since `my_eval()` is the most dangerous operation in the `DB` class, we define a new class `TaintedDB` that overrides the `my_eval()` to throw an exception whenever an untrusted string reaches this part.
###Code
class TaintedDB(DB):
def my_eval(self, statement, g, l):
if statement.taint != 'TRUSTED':
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
We initialize an instance of `TaintedDB`
###Code
tdb = TaintedDB()
tdb.db = db.db
###Output
_____no_output_____
###Markdown
Then we start fuzzing.
###Code
import traceback
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = tdb.sql(tstr(query, taint='UNTRUSTED'))
print(repr(res))
except SQLException as e:
pass
except Tainted as e:
print("> ", e)
except:
traceback.print_exc()
break
print()
###Output
'delete from inventory where y/u-l+f/y<Y(c)/A-H*q'
> Tainted[y/u-l+f/y<Y(c)/A-H*q]
"insert into inventory (G,Wmp,sl3hku3) values ('<','?')"
"insert into inventory (d0) values (',_G')"
'select P*Q-w/x from inventory where X<j==:==j*r-f'
> Tainted[(X<j==:==j*r-f)]
'select a>F*i from inventory where Q/I-_+P*j>.'
> Tainted[(Q/I-_+P*j>.)]
'select (V-i<T/g) from inventory where T/r/G<FK(m)/(i)'
> Tainted[(T/r/G<FK(m)/(i))]
'select (((i))),_(S,_)/L-k<H(Sv,R,n,W,Y) from inventory'
> Tainted[((((i))),_(S,_)/L-k<H(Sv,R,n,W,Y))]
'select (N==c*U/P/y),i-e/n*y,T!=w,u from inventory'
> Tainted[((N==c*U/P/y),i-e/n*y,T!=w,u)]
'update inventory set _=B,n=v where o-p*k-J>T'
'select s from inventory where w4g4<.m(_)/_>t'
> Tainted[(w4g4<.m(_)/_>t)]
###Markdown
One can see that `insert`, `update`, `select` and `delete` statements on an existing table lead to taint exceptions. We can now focus on these specific kinds of inputs. However, this is not the only thing we can do. We will see how we can identify specific portions of input that reached tainted execution using character origins in the later sections. But before that, we explore other uses of taints. Preventing Privacy LeaksUsing taints, we can also ensure that secret information does not leak out. We can assign a special taint `"SECRET"` to strings whose information must not leak out:
###Code
secrets = tstr('<Plenty of secret keys>', taint='SECRET')
###Output
_____no_output_____
###Markdown
Accessing any substring of `secrets` will propagate the taint:
###Code
secrets[1:3].taint # type: ignore
###Output
_____no_output_____
###Markdown
Consider the _heartbeat_ security leak from [the chapter on Fuzzing](Fuzzer.ipynb), in which a server would accidentally send back not only the user input sent to it, but also secret memory. If the reply consists only of the user input, there is no taint associated with it:
###Code
user_input = "hello"
reply = user_input
isinstance(reply, tstr)
###Output
_____no_output_____
###Markdown
If, however, the reply contains _any_ part of the secret, the reply will be tainted:
###Code
reply = user_input + secrets[0:5]
reply
reply.taint # type: ignore
###Output
_____no_output_____
###Markdown
The output function of our server would now ensure that the data sent back does not contain any secret information:
###Code
def send_back(s):
assert not isinstance(s, tstr) and not s.taint == 'SECRET' # type: ignore
...
with ExpectError():
send_back(reply)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_15398/3747050841.py", line 2, in <module>
send_back(reply)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_15398/3158733057.py", line 2, in send_back
assert not isinstance(s, tstr) and not s.taint == 'SECRET' # type: ignore
AssertionError (expected)
###Markdown
Our `tstr` solution can help to identify information leaks – but it is by no means complete. If we actually take the `heartbeat()` implementation from [the chapter on Fuzzing](Fuzzer.ipynb), we will see that _any_ reply is marked as `SECRET` – even replies that do not access secret memory at all:
###Code
from Fuzzer import heartbeat
reply = heartbeat('hello', 5, memory=secrets)
reply.taint # type: ignore
###Output
_____no_output_____
###Markdown
Why is this? If we look into the implementation of `heartbeat()`, we will see that it first builds a long string `memory` from the (non-secret) reply and the (secret) memory, before returning the first characters from `memory`.
```python
# Store reply in memory
memory = reply + memory[len(reply):]
```
At this point, the whole memory still is tainted as `SECRET`, _including_ the non-secret part from `reply`. We may be able to circumvent the issue by tagging the `reply` as `PUBLIC` – but then, this taint would be in conflict with the `SECRET` tag of `memory`. What happens if we compose a string from two differently tainted strings?
###Code
thilo = tstr("High", taint='HIGH') + tstr("Low", taint='LOW')
###Output
_____no_output_____
###Markdown
It turns out that in this case, the `__add__()` method takes precedence over the `__radd__()` method, which means that the right-hand `"Low"` string is treated as a regular (non-tainted) string.
###Code
thilo
thilo.taint # type: ignore
###Output
_____no_output_____
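###Markdown
One way to make such combinations explicit is to merge the taints of both operands according to a precedence rule. Here is a small sketch of our own (as the discussion below points out, the appropriate rule is application-dependent); in this variant, a `HIGH` taint dominates:
###Code
# Sketch: a tstr variant whose concatenation merges taints, letting 'HIGH' dominate
class prec_tstr(tstr):
    def combined_taint(self, other):
        other_taint = getattr(other, 'taint', None)
        if 'HIGH' in (self.taint, other_taint):
            return 'HIGH'
        return self.taint if self.taint is not None else other_taint

    def __add__(self, other):
        return prec_tstr(str(self) + str(other), taint=self.combined_taint(other))

    def __radd__(self, other):
        return prec_tstr(str(other) + str(self), taint=self.combined_taint(other))

assert (prec_tstr('High', taint='HIGH') + prec_tstr('Low', taint='LOW')).taint == 'HIGH'
assert (prec_tstr('Low', taint='LOW') + prec_tstr('High', taint='HIGH')).taint == 'HIGH'
###Output
_____no_output_____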
###Markdown
We could set up the `__add__()` and other methods with special handling for conflicting taints. However, the way this conflict should be resolved would be highly _application-dependent_:* If we use taints to indicate _privacy levels_, `SECRET` privacy should take precedence over `PUBLIC` privacy. Any combination of a `SECRET`-tainted string and a `PUBLIC`-tainted string thus should have a `SECRET` taint.* If we use taints to indicate _origins_ of information, an `UNTRUSTED` origin should take precedence over a `TRUSTED` origin. Any combination of an `UNTRUSTED`-tainted string and a `TRUSTED`-tainted string thus should have an `UNTRUSTED` taint.Of course, such conflict resolutions can be implemented. But even so, they will not help us differentiate secret from non-secret output data in the `heartbeat()` example. Tracking Individual CharactersFortunately, there is a better, more generic way to solve the above problems. The key to composing differently tainted strings is to assign taints not only to strings, but actually to every bit of information – in our case, characters. If every character has a taint on its own, a new composition of characters will simply inherit this very taint _per character_. To this end, we introduce a second bit of information named _origin_. Distinguishing various untrusted sources may be accomplished by marking each instance with a separate origin (called *colors* in dynamic taint research). You will see an instance of this technique in the chapter on [Grammar Mining](GrammarMiner.ipynb). In this section, we carry *character level* origins. That is, given a fragment that resulted from a portion of the original origined string, one will be able to tell which portion of the input string the fragment was taken from. In essence, each input character index from an origined source gets its own color. More complex origin schemes such as *bitmap origins* are possible, where a single character may result from multiple origined character indexes (such as *checksum* operations on strings). We do not consider these in this chapter. A Class for Tracking Character OriginsLet us introduce a class `ostr` which, like `tstr`, carries a taint for each string, and additionally an _origin_ for each character that indicates its source. The origin is a consecutive number in a particular range (by default, starting with zero) indicating its _position_ within a specific origin.
###Code
class ostr(str):
"""Wrapper for strings, saving taint and origin information"""
DEFAULT_ORIGIN = 0
def __new__(cls, value, *args, **kw):
"""Create an ostr() instance. Used internally."""
return str.__new__(cls, value)
def __init__(self, value: Any, taint: Any = None,
origin: Optional[Union[int, List[int]]] = None, **kwargs) -> None:
"""Constructor.
`value` is the string value the `ostr` object is to be constructed from.
`taint` is an (optional) taint to be propagated to derived strings.
`origin` (optional) is either
- an integer denoting the index of the first character in `value`, or
- a list of integers denoting the origins of the characters in `value`,
"""
self.taint = taint
if origin is None:
origin = ostr.DEFAULT_ORIGIN
if isinstance(origin, int):
self.origin = list(range(origin, origin + len(self)))
else:
self.origin = origin
assert len(self.origin) == len(self)
###Output
_____no_output_____
###Markdown
As with `tstr`, above, we implement methods for conversion into (regular) Python strings:
###Code
class ostr(ostr):
def create(self, s):
return ostr(s, taint=self.taint, origin=self.origin)
class ostr(ostr):
UNKNOWN_ORIGIN = -1
def __repr__(self):
# handle escaped chars
origin = [ostr.UNKNOWN_ORIGIN]
for s, o in zip(str(self), self.origin):
origin.extend([o] * (len(repr(s)) - 2))
origin.append(ostr.UNKNOWN_ORIGIN)
return ostr(str.__repr__(self), taint=self.taint, origin=origin)
class ostr(ostr):
def __str__(self):
return str.__str__(self)
###Output
_____no_output_____
###Markdown
By default, character origins start with `0`:
###Code
othello = ostr('hello')
assert othello.origin == [0, 1, 2, 3, 4]
###Output
_____no_output_____
###Markdown
We can also specify the starting origin as below -- `6..10`
###Code
tworld = ostr('world', origin=6)
assert tworld.origin == [6, 7, 8, 9, 10]
a = ostr("hello\tworld")
repr(a).origin # type: ignore
###Output
_____no_output_____
###Markdown
`str()` returns a `str` instance without origin or taint information:
###Code
assert type(str(othello)) == str
###Output
_____no_output_____
###Markdown
`repr()`, however, keeps the origin information for the original string:
###Code
repr(othello)
repr(othello).origin # type: ignore
###Output
_____no_output_____
###Markdown
Just as with taints, we can clear origins and check whether an origin is present:
###Code
class ostr(ostr):
def clear_taint(self):
self.taint = None
return self
def has_taint(self):
return self.taint is not None
class ostr(ostr):
def clear_origin(self):
self.origin = [self.UNKNOWN_ORIGIN] * len(self)
return self
def has_origin(self):
return any(origin != self.UNKNOWN_ORIGIN for origin in self.origin)
othello = ostr('Hello')
assert othello.has_origin()
othello.clear_origin()
assert not othello.has_origin()
###Output
_____no_output_____
###Markdown
In the remainder of this section, we re-implement various string methods such that they also keep track of origins. If this is too tedious for you, jump right [to the next section](Checking-Origins) which gives a number of usage examples. Excursion: Implementing String Methods CreateWe need to create new substrings that are wrapped in `ostr` objects. However, we also want to allow our subclasses to create their own instances. Hence we again provide a `create()` method that produces a new `ostr` instance.
###Code
class ostr(ostr):
def create(self, res, origin=None):
return ostr(res, taint=self.taint, origin=origin)
othello = ostr('hello', taint='HIGH')
otworld = othello.create('world', origin=6)
otworld.origin
otworld.taint
assert (othello.origin, otworld.origin) == (
[0, 1, 2, 3, 4], [6, 7, 8, 9, 10])
###Output
_____no_output_____
###Markdown
IndexIn Python, indexing is provided through `__getitem__()`. Indexing with positive integers is simple enough. However, there are two additional wrinkles. The first is that, if the index is negative, it counts backwards from the end of the string; that is, the last character has the negative index `-1`
###Code
class ostr(ostr):
def __getitem__(self, key):
res = super().__getitem__(key)
if isinstance(key, int):
key = len(self) + key if key < 0 else key
return self.create(res, [self.origin[key]])
elif isinstance(key, slice):
return self.create(res, self.origin[key])
else:
assert False
ohello = ostr('hello', taint='HIGH')
assert (ohello[0], ohello[-1]) == ('h', 'o')
ohello[0].taint
###Output
_____no_output_____
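###Markdown
As a quick check (a sketch of our own), negative indexes map back to the correct origin positions:
###Code
# Sketch: negative indexes preserve the origin of the addressed character
onums = ostr('0123456789')
assert onums[-1].origin == [9]
assert onums[-3].origin == [7]
###Output
_____no_output_____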
###Markdown
The other wrinkle is that `__getitem__()` can accept a slice. We discuss this next. SlicesThe Python `slice` operator `[n:m]` relies on the object being an `iterator`. Hence, we define the `__iter__()` method, which returns a custom `iterator`.
###Code
class ostr(ostr):
def __iter__(self):
return ostr_iterator(self)
###Output
_____no_output_____
###Markdown
The `__iter__()` method requires a supporting `iterator` object. The `iterator` is used to save the state of the current iteration, which it does by keeping a reference to the original `ostr`, and the current index of iteration `_str_idx`.
###Code
class ostr_iterator():
def __init__(self, ostr):
self._ostr = ostr
self._str_idx = 0
def __next__(self):
if self._str_idx == len(self._ostr):
raise StopIteration
# calls ostr getitem should be ostr
c = self._ostr[self._str_idx]
assert isinstance(c, ostr)
self._str_idx += 1
return c
###Output
_____no_output_____
###Markdown
Bringing all these together:
###Code
thw = ostr('hello world', taint='HIGH')
thw[0:5]
assert thw[0:5].has_taint()
assert thw[0:5].has_origin()
thw[0:5].taint
thw[0:5].origin
###Output
_____no_output_____
###Markdown
Splits
###Code
def make_split_wrapper(fun):
def proxy(self, *args, **kwargs):
lst = fun(self, *args, **kwargs)
return [self.create(elem) for elem in lst]
return proxy
for name in ['split', 'rsplit', 'splitlines']:
fun = getattr(str, name)
setattr(ostr, name, make_split_wrapper(fun))
othello = ostr('hello world', taint='LOW')
othello == 'hello world'
othello.split()[0].taint # type: ignore
###Output
_____no_output_____
###Markdown
(Exercise for the reader: handle _partitions_, i.e., splitting a string by substrings) ConcatenationIf two origined strings are concatenated together, it may be desirable to transfer the origins from each to the corresponding portion of the resulting string. The concatenation of strings is accomplished by overriding `__add__()`.
###Code
class ostr(ostr):
def __add__(self, other):
if isinstance(other, ostr):
return self.create(str.__add__(self, other),
(self.origin + other.origin))
else:
return self.create(str.__add__(self, other),
(self.origin + [self.UNKNOWN_ORIGIN for i in other]))
###Output
_____no_output_____
###Markdown
Testing concatenations between two `ostr` instances:
###Code
othello = ostr("hello")
otworld = ostr("world", origin=6)
othw = othello + otworld
assert othw.origin == [0, 1, 2, 3, 4, 6, 7, 8, 9, 10] # type: ignore
###Output
_____no_output_____
###Markdown
What if a `ostr` is concatenated with a `str`?
###Code
space = " "
th_w = othello + space + otworld
assert th_w.origin == [
0,
1,
2,
3,
4,
ostr.UNKNOWN_ORIGIN,
ostr.UNKNOWN_ORIGIN,
6,
7,
8,
9,
10]
###Output
_____no_output_____
###Markdown
One wrinkle here is that when adding an `ostr` and a `str`, the user may place the `str` first, in which case the `__add__()` method would be called on the `str` instance, not on the `ostr` instance. However, Python provides a solution: if one defines `__radd__()` on the `ostr` instance, that method will be called rather than `str.__add__()`
###Code
class ostr(ostr):
def __radd__(self, other):
origin = other.origin if isinstance(other, ostr) else [
self.UNKNOWN_ORIGIN for i in other]
return self.create(str.__add__(other, self), (origin + self.origin))
###Output
_____no_output_____
###Markdown
We test it out:
###Code
shello = "hello"
otworld = ostr("world")
thw = shello + otworld
assert thw.origin == [ostr.UNKNOWN_ORIGIN] * len(shello) + [0, 1, 2, 3, 4] # type: ignore
###Output
_____no_output_____
###Markdown
These two methods, slicing and concatenation, are sufficient to implement other string methods that result in a string and do not change the characters themselves (i.e., no case change). Hence, we look at a helper method next. Extract Origin StringGiven a specific input index, the method `x()` extracts the corresponding origined portion from an `ostr`. As a convenience, it supports `slices` along with `ints`.
###Code
class ostr(ostr):
class TaintException(Exception):
pass
def x(self, i=0):
"""Extract substring at index/slice `i`"""
if not self.origin:
raise ostr.TaintException('Invalid request idx')
if isinstance(i, int):
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j == i]]
elif isinstance(i, slice):
r = range(i.start or 0, i.stop or len(self), i.step or 1)
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j in r]]
thw = ostr('hello world', origin=100)
assert thw.x(101) == ['e']
assert thw.x(slice(101, 105)) == ['e', 'l', 'l', 'o']
###Output
_____no_output_____
###Markdown
Replace The `replace()` method replaces a portion of the string with another.
###Code
class ostr(ostr):
def replace(self, a, b, n=None):
old_origin = self.origin
b_origin = b.origin if isinstance(
b, ostr) else [self.UNKNOWN_ORIGIN] * len(b)
mystr = str(self)
i = 0
while True:
if n and i >= n:
break
idx = mystr.find(a)
if idx == -1:
break
last = idx + len(a)
mystr = mystr.replace(a, b, 1)
partA, partB = old_origin[0:idx], old_origin[last:]
old_origin = partA + b_origin + partB
i += 1
return self.create(mystr, old_origin)
my_str = ostr("aa cde aa")
res = my_str.replace('aa', 'bb')
assert (res, res.origin) == ('bb cde bb',
                             [ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN,
                              2, 3, 4, 5, 6,
                              ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN])
my_str = ostr("aa cde aa")
res = my_str.replace('aa', ostr('bb', origin=100))
assert (
res, res.origin) == (
('bb cde bb'), [
100, 101, 2, 3, 4, 5, 6, 100, 101])
###Output
_____no_output_____
###Markdown
Split We essentially have to re-implement split operations, and split by space is slightly different from other splits.
###Code
class ostr(ostr):
def _split_helper(self, sep, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = len(sep)
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
first_idx = last_idx + sep_len
return result_list
def _split_space(self, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = 0
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
v = str(self[last_idx:])
sep_len = len(v) - len(v.lstrip(' '))
first_idx = last_idx + sep_len
return result_list
def rsplit(self, sep=None, maxsplit=-1):
splitted = super().rsplit(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
def split(self, sep=None, maxsplit=-1):
splitted = super().split(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
my_str = ostr('ab cdef ghij kl')
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([0, 1], [3, 4, 5, 6], [8, 9, 10, 11], [13, 14])
my_str = ostr('ab cdef ghij kl', origin=list(range(0, 15)))
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert(ab.origin, cdef.origin, kl.origin) == ([0, 1], [3, 4, 5, 6], [13, 14])
my_str = ostr('ab   cdef ghij    kl', origin=100, taint='HIGH')
ab, cdef, ghij, kl = my_str.rsplit()
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([100, 101], [105, 106, 107, 108], [110, 111, 112, 113],
[118, 119])
my_str = ostr('ab   cdef ghij    kl', origin=list(range(0, 20)), taint='HIGH')
ab, cdef, ghij, kl = my_str.split()
assert (ab.origin, cdef.origin, kl.origin) == ([0, 1], [5, 6, 7, 8], [18, 19])
assert ab.taint == 'HIGH'
###Output
_____no_output_____
###Markdown
Strip
###Code
class ostr(ostr):
def strip(self, cl=None):
return self.lstrip(cl).rstrip(cl)
def lstrip(self, cl=None):
res = super().lstrip(cl)
i = self.find(res)
return self[i:]
def rstrip(self, cl=None):
res = super().rstrip(cl)
return self[0:len(res)]
my_str1 = ostr(" abc ")
v = my_str1.strip()
assert v, v.origin == ('abc', [2, 3, 4])
my_str1 = ostr(" abc ")
v = my_str1.lstrip()
assert (v, v.origin) == ('abc ', [2, 3, 4, 5, 6])
my_str1 = ostr(" abc ")
v = my_str1.rstrip()
assert (v, v.origin) == (' abc', [0, 1, 2, 3, 4])
###Output
_____no_output_____
###Markdown
Expand Tabs
###Code
class ostr(ostr):
def expandtabs(self, n=8):
parts = self.split('\t')
res = super().expandtabs(n)
all_parts = []
for i, p in enumerate(parts):
all_parts.extend(p.origin)
if i < len(parts) - 1:
l = len(all_parts) % n
all_parts.extend([p.origin[-1]] * l)
return self.create(res, all_parts)
my_s = str("ab\tcd")
my_ostr = ostr("ab\tcd")
v1 = my_s.expandtabs(4)
v2 = my_ostr.expandtabs(4)
assert str(v1) == str(v2)
assert (len(v1), repr(v2), v2.origin) == (6, "'ab  cd'", [0, 1, 1, 1, 3, 4])
class ostr(ostr):
def join(self, iterable):
mystr = ''
myorigin = []
sep_origin = self.origin
lst = list(iterable)
for i, s in enumerate(lst):
sorigin = s.origin if isinstance(s, ostr) else [
self.UNKNOWN_ORIGIN] * len(s)
myorigin.extend(sorigin)
mystr += str(s)
if i < len(lst) - 1:
myorigin.extend(sep_origin)
mystr += str(self)
res = super().join(iterable)
assert len(res) == len(mystr)
return self.create(res, myorigin)
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104]) # type: ignore
v4 = ostr('').join([v2, v3, v1])
assert (
v4, v4.origin) == (
'cdefab', [
103, 104, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 100, 101])
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104]) # type: ignore
v4 = ostr(',').join([v2, v3, v1])
assert (v4, v4.origin) == ('cd,ef,ab',
[103, 104, 0, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 0, 100, 101]) # type: ignore
###Output
_____no_output_____
###Markdown
Partitions
###Code
class ostr(ostr):
def partition(self, sep):
partA, sep, partB = super().partition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
def rpartition(self, sep):
partA, sep, partB = super().rpartition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
###Output
_____no_output_____
###Markdown
Justify
###Code
class ostr(ostr):
def ljust(self, width, fillchar=' '):
res = super().ljust(width, fillchar)
initial = len(res) - len(self)
if isinstance(fillchar, tstr):
t = fillchar.x()
else:
t = self.UNKNOWN_ORIGIN
return self.create(res, [t] * initial + self.origin)
class ostr(ostr):
def rjust(self, width, fillchar=' '):
res = super().rjust(width, fillchar)
final = len(res) - len(self)
if isinstance(fillchar, tstr):
t = fillchar.x()
else:
t = self.UNKNOWN_ORIGIN
return self.create(res, self.origin + [t] * final)
###Output
_____no_output_____
###Markdown
mod
###Code
class ostr(ostr):
def __mod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
s_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = self.find('%s')
assert i >= 0
res = super().__mod__(s)
r_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
class ostr(ostr):
def __rmod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
r_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = s.find('%s')
assert i >= 0
res = super().__rmod__(s)
s_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
a = ostr('hello %s world', origin=100)
a
(a % 'good').origin
b = 'hello %s world'
c = ostr('bad', origin=10)
(b % c).origin
###Output
_____no_output_____
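###Markdown
As a quick check (our own sketch), the characters substituted for the `%s` placeholder get "unknown" origins, while the surrounding characters keep theirs:
###Code
# Sketch: origins around a '%s' substitution
fmt = ostr('hello %s world', origin=100)
assert (fmt % 'good').origin == [100, 101, 102, 103, 104, 105,
                                 ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN,
                                 ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN,
                                 108, 109, 110, 111, 112, 113]
###Output
_____no_output_____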
###Markdown
String methods that do not change origin
###Code
class ostr(ostr):
def swapcase(self):
return self.create(str(self).swapcase(), self.origin)
def upper(self):
return self.create(str(self).upper(), self.origin)
def lower(self):
return self.create(str(self).lower(), self.origin)
def capitalize(self):
return self.create(str(self).capitalize(), self.origin)
def title(self):
return self.create(str(self).title(), self.origin)
a = ostr('aa', origin=100).upper()
a, a.origin
###Output
_____no_output_____
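###Markdown
Case changes keep origins and taints intact, as a quick sketch of our own confirms:
###Code
# Sketch: case-changing methods preserve origin and taint
o = ostr('MiXeD', origin=50, taint='LOW')
assert o.lower().origin == [50, 51, 52, 53, 54]
assert o.upper().taint == 'LOW'
###Output
_____no_output_____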
###Markdown
General wrappers These are not strictly needed for operation, but can be useful for tracing.
###Code
def make_basic_str_wrapper(fun): # type: ignore
def proxy(*args, **kwargs):
res = fun(*args, **kwargs)
return res
return proxy
import inspect
import types
def informationflow_init_2():
ostr_members = [name for name, fn in inspect.getmembers(ostr, callable)
if isinstance(fn, types.FunctionType) and fn.__qualname__.startswith('ostr')]
for name, fn in inspect.getmembers(str, callable):
if name not in set(['__class__', '__new__', '__str__', '__init__',
'__repr__', '__getattribute__']) | set(ostr_members):
setattr(ostr, name, make_basic_str_wrapper(fn))
informationflow_init_2()
INITIALIZER_LIST.append(informationflow_init_2)
###Output
_____no_output_____
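###Markdown
With these general wrappers in place, predicates and search methods behave just like their `str` counterparts, simply passing their results through (a small sketch of our own):
###Code
# Sketch: wrapped predicates return plain values, unchanged
o = ostr('hello')
assert o.startswith('he')
assert o.find('llo') == 2
###Output
_____no_output_____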
###Markdown
Methods yet to be translated These methods generate strings from other strings. However, we do not have the right implementations for any of these. Hence these are marked as dangerous until we can generate the right translations.
###Code
def make_str_abort_wrapper(fun):
def proxy(*args, **kwargs):
raise ostr.TaintException(
'%s Not implemented in `ostr`' %
fun.__name__)
return proxy
def informationflow_init_3():
for name, fn in inspect.getmembers(str, callable):
# Omitted 'splitlines' as this is needed for formatting output in
# IPython/Jupyter
if name in ['__format__', 'format_map', 'format',
'__mul__', '__rmul__', 'center', 'zfill', 'decode', 'encode']:
setattr(ostr, name, make_str_abort_wrapper(fn))
informationflow_init_3()
INITIALIZER_LIST.append(informationflow_init_3)
###Output
_____no_output_____
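###Markdown
As a quick check (our own sketch), calling one of these not-yet-translated methods now aborts with a `TaintException`:
###Code
# Sketch: un-translated methods abort rather than silently dropping origins
o = ostr('abc')
try:
    o.center(10)
    aborted = False
except ostr.TaintException:
    aborted = True
assert aborted
###Output
_____no_output_____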
###Markdown
While generating proxy wrappers for string operations can handle most common cases of information flow, some of the operations involving strings cannot be overridden this way; we will see examples of such limitations at the end of this chapter. End of Excursion Checking OriginsWith all this implemented, we now have full-fledged `ostr` strings where we can easily check the origin of each and every character. To check whether a string originates from another string, we can convert the origin to a set and resort to standard set operations:
###Code
s = ostr("hello", origin=100)
s[1]
s[1].origin
set(s[1].origin) <= set(s.origin)
t = ostr("world", origin=200)
set(s.origin) <= set(t.origin)
u = s + t + "!"
u.origin
ostr.UNKNOWN_ORIGIN in u.origin
###Output
_____no_output_____
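###Markdown
Such set checks can be packaged into a small helper (a sketch of our own) that tells whether every character of a fragment stems from a given source string:
###Code
# Sketch: does every character of `fragment` originate from `source`?
def originates_from(fragment, source):
    return set(fragment.origin) <= set(source.origin)

s = ostr("hello", origin=100)
t = ostr("world", origin=200)
assert originates_from(s[1:3], s)
assert not originates_from(s + t, s)
###Output
_____no_output_____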
###Markdown
Privacy Leaks RevisitedLet us apply character-level origins to see whether we can come up with a satisfactory solution for checking the `heartbeat()` function against information leakage.
###Code
SECRET_ORIGIN = 1000
###Output
_____no_output_____
###Markdown
We define a "secret" that must not leak out:
###Code
secret = ostr('<again, some super-secret input>', origin=SECRET_ORIGIN)
###Output
_____no_output_____
###Markdown
Each and every character in `secret` has an origin starting with `SECRET_ORIGIN`:
###Code
print(secret.origin)
###Output
[1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031]
###Markdown
If we now invoke `heartbeat()` with a given string, the origin of the reply should all be `UNKNOWN_ORIGIN` (from the input), and none of the characters should have a `SECRET_ORIGIN`.
###Code
hello_s = heartbeat('hello', 5, memory=secret)
hello_s
assert isinstance(hello_s, ostr)
print(hello_s.origin)
###Output
[-1, -1, -1, -1, -1]
###Markdown
We can verify that the secret did not leak out by formulating appropriate assertions:
###Code
assert hello_s.origin == [ostr.UNKNOWN_ORIGIN] * len(hello_s)
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in hello_s.origin)
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
###Output
_____no_output_____
###Markdown
All assertions pass, again confirming that no secret leaked out. Let us now go and exploit `heartbeat()` to reveal its secrets. As `heartbeat()` is unchanged, it is as vulnerable as it was:
###Code
hello_s = heartbeat('hello', 32, memory=secret)
hello_s
###Output
_____no_output_____
###Markdown
Now, however, the reply _does_ contain secret information:
###Code
assert isinstance(hello_s, ostr)
print(hello_s.origin)
with ExpectError():
assert hello_s.origin == [ostr.UNKNOWN_ORIGIN] * len(hello_s)
with ExpectError():
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in hello_s.origin)
with ExpectError():
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_15398/1577803914.py", line 2, in <module>
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
AssertionError (expected)
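###Markdown
As a sketch of our own (not the chapter's solution), such a check could also be wrapped into the output function itself, refusing to send anything whose characters carry a secret origin:
###Code
# Sketch: an output function that rejects replies containing SECRET-origin characters
def send_back_checked(s):
    if isinstance(s, ostr) and any(o >= SECRET_ORIGIN for o in s.origin):
        raise AssertionError("Reply contains secret data")
    # ... actually send out the reply ...

try:
    send_back_checked(hello_s)  # hello_s leaked secret memory above
    leak_detected = False
except AssertionError:
    leak_detected = True
assert leak_detected
###Output
_____no_output_____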
###Markdown
We can now integrate these assertions into the `heartbeat()` function, causing it to fail before leaking information. Additionally (or alternatively), we can also rewrite our output functions not to give out any secret information. We will leave these two exercises for the reader. Taint-Directed FuzzingThe previous _Taint Aware Fuzzing_ was a bit unsatisfactory in that we could not focus on the specific parts of the grammar that led to dangerous operations. We fix that with _taint-directed fuzzing_ using `TrackingDB`. The idea here is to track the origins of each character that reaches `eval()`, trace them back to the grammar nodes that generated them, and increase the probability of using those nodes again. TrackingDBThe `TrackingDB` is similar to `TaintedDB`. The difference is that whenever execution reaches `my_eval()` with a string that carries an origin, we simply raise `Tainted`.
###Code
class TrackingDB(TaintedDB):
def my_eval(self, statement, g, l):
if statement.origin:
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
Next, we need a specially crafted fuzzer that preserves the taints. TaintedGrammarFuzzerWe define a `TaintedGrammarFuzzer` class that ensures that the taints propagate to the derivation tree. This is similar to the `GrammarFuzzer` from the [chapter on grammar fuzzers](GrammarFuzzer.ipynb) except that the origins and taints are preserved.
###Code
import random
from GrammarFuzzer import GrammarFuzzer
from Parser import canonical
class TaintedGrammarFuzzer(GrammarFuzzer):
def __init__(self,
grammar,
start_symbol=START_SYMBOL,
expansion_switch=1,
log=False):
self.tainted_start_symbol = ostr(
start_symbol, origin=[1] * len(start_symbol))
self.expansion_switch = expansion_switch
self.log = log
self.grammar = grammar
self.c_grammar = canonical(grammar)
self.init_tainted_grammar()
def expansion_cost(self, expansion, seen=set()):
symbols = [e for e in expansion if e in self.c_grammar]
if len(symbols) == 0:
return 1
if any(s in seen for s in symbols):
return float('inf')
return sum(self.symbol_cost(s, seen) for s in symbols) + 1
def fuzz_tree(self):
tree = (self.tainted_start_symbol, [])
nt_leaves = [tree]
expansion_trials = 0
while nt_leaves:
idx = random.randint(0, len(nt_leaves) - 1)
key, children = nt_leaves[idx]
expansions = self.ct_grammar[key]
if expansion_trials < self.expansion_switch:
expansion = random.choice(expansions)
else:
costs = [self.expansion_cost(e) for e in expansions]
m = min(costs)
all_min = [i for i, c in enumerate(costs) if c == m]
expansion = expansions[random.choice(all_min)]
new_leaves = [(token, []) for token in expansion]
new_nt_leaves = [e for e in new_leaves if e[0] in self.ct_grammar]
children[:] = new_leaves
nt_leaves[idx:idx + 1] = new_nt_leaves
if self.log:
print("%-40s" % (key + " -> " + str(expansion)))
expansion_trials += 1
return tree
def fuzz(self):
self.derivation_tree = self.fuzz_tree()
return self.tree_to_string(self.derivation_tree)
###Output
_____no_output_____
###Markdown
We use a specially prepared tainted grammar for fuzzing. We mark each individual definition, each individual rule, and each individual token with a separate origin (we chose a token boundary of 10 here, after inspecting the grammar). This allows us to track exactly which parts of the grammar were involved in the operations we are interested in.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def init_tainted_grammar(self):
key_increment, alt_increment, token_increment = 1000, 100, 10
key_origin = key_increment
self.ct_grammar = {}
for key, val in self.c_grammar.items():
key_origin += key_increment
os = []
for v in val:
ts = []
key_origin += alt_increment
for t in v:
nt = ostr(t, origin=key_origin)
key_origin += token_increment
ts.append(nt)
os.append(ts)
self.ct_grammar[key] = os
# a use tracking grammar
self.ctp_grammar = {}
for key, val in self.ct_grammar.items():
self.ctp_grammar[key] = [(v, dict(use=0)) for v in val]
###Output
_____no_output_____
###Markdown
As before, we initialize the `TrackingDB`
###Code
trdb = TrackingDB(db.db)
###Output
_____no_output_____
###Markdown
Finally, we need to ensure that the taints are preserved when the tree is converted back to a string. For this, we override `tree_to_string()`:
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def tree_to_string(self, tree):
symbol, children, *_ = tree
e = ostr('')
if children:
return e.join([self.tree_to_string(c) for c in children])
else:
return e if symbol in self.c_grammar else symbol
###Output
_____no_output_____
###Markdown
We define `update_grammar()`, which takes the set of origins that reached the dangerous operation and the derivation tree of the fuzzed string, and uses them to update the enhanced grammar.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def update_grammar(self, origin, dtree):
def update_tree(dtree, origin):
key, children = dtree
if children:
updated_children = [update_tree(c, origin) for c in children]
corigin = set.union(
*[o for (key, children, o) in updated_children])
corigin = corigin.union(set(key.origin))
return (key, children, corigin)
else:
my_origin = set(key.origin).intersection(origin)
return (key, [], my_origin)
key, children, oset = update_tree(dtree, set(origin))
for key, alts in self.ctp_grammar.items():
for alt, o in alts:
alt_origins = set([i for token in alt for i in token.origin])
if alt_origins.intersection(oset):
o['use'] += 1
###Output
_____no_output_____
###Markdown
With these, we are now ready to fuzz.
###Code
def tree_type(tree):
key, children = tree
return (type(key), key, [tree_type(c) for c in children])
tgf = TaintedGrammarFuzzer(INVENTORY_GRAMMAR_F)
x = None
for _ in range(10):
qtree = tgf.fuzz_tree()
query = tgf.tree_to_string(qtree)
assert isinstance(query, ostr)
try:
print(repr(query))
res = trdb.sql(query)
print(repr(res))
except SQLException as e:
print(e)
except Tainted as e:
print(e)
origin = e.args[0].origin
tgf.update_grammar(origin, qtree)
except:
traceback.print_exc()
break
print()
###Output
'select (g!=(9)!=((:)==2==9)!=J)==-7 from inventory'
Tainted[((g!=(9)!=((:)==2==9)!=J)==-7)]
'delete from inventory where ((c)==T)!=5==(8!=Y)!=-5'
Tainted[((c)==T)!=5==(8!=Y)!=-5]
'select (((w==(((X!=------8)))))) from inventory'
Tainted[((((w==(((X!=------8)))))))]
'delete from inventory where ((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))'
Tainted[((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))]
'delete from inventory where ((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))'
Tainted[((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))]
'delete from inventory where o!=2==---5==3!=t'
Tainted[o!=2==---5==3!=t]
'select (2) from inventory'
Tainted[((2))]
'select _ from inventory'
Tainted[(_)]
'select L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)) from inventory'
Tainted[(L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)))]
'delete from inventory where _==(7==(9)!=(---5)==1)==-8'
Tainted[_==(7==(9)!=(---5)==1)==-8]
###Markdown
We can now inspect our enhanced grammar to see how many times each rule was used.
###Code
tgf.ctp_grammar
###Output
_____no_output_____
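###Markdown
To find out which parts of the grammar were involved most often, one could rank the alternatives by the `use` counters maintained above (a small sketch of our own):
###Code
# Sketch: collect the expansion alternatives that reached a dangerous operation
used_alternatives = [(key, [str(token) for token in alt], stats['use'])
                     for key, alts in tgf.ctp_grammar.items()
                     for alt, stats in alts
                     if stats['use'] > 0]
used_alternatives.sort(key=lambda entry: entry[-1], reverse=True)
###Output
_____no_output_____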
###Markdown
From here, the idea is to focus on the rules that reached dangerous operations more often, and to increase the probability of producing values of that kind. The Limits of Taint TrackingWhile our framework can detect information leakage, it is by no means perfect. There are several ways in which taints can get lost and information thus may still leak out. ConversionsWe only track taints and origins through _strings_ and _characters_. If we convert these to numbers (or other data), the information is lost. As an example, consider this function, which converts individual characters to numbers and back:
###Code
def strip_all_info(s):
t = ""
for c in s:
t += chr(ord(c))
return t
othello = ostr("Secret")
othello
othello.origin # type: ignore
###Output
_____no_output_____
###Markdown
The taints and origins will not propagate through the number conversion:
###Code
thello_stripped = strip_all_info(thello)
thello_stripped
with ExpectError():
thello_stripped.origin
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_15398/588526133.py", line 2, in <module>
thello_stripped.origin
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
This issue could be addressed by extending numbers with taints and origins, just as we did for strings. At some point, however, this will still break down, because as soon as an internal C function in the Python library is reached, the taint will not propagate into and across the C function. (Unless one starts implementing dynamic taints for these, that is.) Internal C libraries As we mentioned before, calls to _internal_ C libraries do not propagate taints. For example, while the following preserves the taints,
###Code
hello = ostr('hello', origin=100)
world = ostr('world', origin=200)
(hello + ' ' + world).origin
###Output
_____no_output_____
###Markdown
a call to a `join` that should be equivalent will fail.
###Code
with ExpectError():
''.join([hello, ' ', world]).origin # type: ignore
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_15398/2341342688.py", line 2, in <module>
''.join([hello, ' ', world]).origin # type: ignore
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
Implicit Information FlowEven if one could taint all data in a program, there still would be means to break information flow – notably by turning explicit flow into _implicit_ flow, or data flow into _control flow_. Here is an example:
###Code
def strip_all_info_again(s):
t = ""
for c in s:
if c == 'a':
t += 'a'
elif c == 'b':
t += 'b'
elif c == 'c':
t += 'c'
...
###Output
_____no_output_____
###Markdown
With such a function, there is no explicit data flow between the characters in `s` and the characters in `t`; yet, the strings would be identical. This problem frequently occurs in programs that process and manipulate external input. Enforcing TaintingBoth, conversions and implicit information flow are one of several possibilities how taint and origin information get lost. To address the problem, the best solution is to _always assume the worst from untainted strings_:* As it comes to trust, an untainted string should be treated as _possibly untrusted_, and hence not relied upon unless sanitized.* As it comes to privacy, an untainted string should be treated as _possibly secret_, and hence not leaked out.As a consequence, your program should always have two kinds of taints: one for explicitly trusted (or secret) and one for explicitly untrusted (or non-secret). If a taint gets lost along the way, you will may have to restore it from its sources – not unlike the string methods discussed above. The benefit is a trusted application, in which each and every information flow can be checked at runtime, with violations quickly discovered through automated tests. Synopsis This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string. Tracking String Taints`tstr` objects are replacements for Python strings that allows to track and check _taints_ – that is, information on from where a string originated. For instance, one can mark strings that originate from third party input with a taint of "LOW", meaning that they have a low security level. The taint is passed in the constructor of a `tstr` object:
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
A `tstr` object is fully compatible with original Python strings. For instance, we can index it and access substrings:
###Code
thello[:4]
###Output
_____no_output_____
###Markdown
However, the `tstr` object also stores the taint, which can be accessed using the `taint` attribute:
###Code
thello.taint
###Output
_____no_output_____
###Markdown
The neat thing about taints is that they propagate to all strings derived from the original tainted string.Indeed, any operation from a `tstr` string that results in a string fragment produces another `tstr` object that includes the original taint. For example:
###Code
thello[1:2].taint # type: ignore
###Output
_____no_output_____
###Markdown
`tstr` objects duplicate most `str` methods, as indicated in the class diagram:
###Code
# ignore
from ClassDiagram import display_class_hierarchy
display_class_hierarchy(tstr)
###Output
_____no_output_____
###Markdown
Tracking Character Origins`ostr` objects extend `tstr` objects by not only tracking a taint, but also the originating _indexes_ from the input string. This allows you to track exactly where individual characters came from. Assume you have a long string, which at index 100 contains the password `"joshua1234"`. Then you can save this origin information using an `ostr` as follows:
###Code
secret = ostr("joshua1234", origin=100, taint='SECRET')
###Output
_____no_output_____
###Markdown
The `origin` attribute of an `ostr` provides access to a list of indexes:
###Code
secret.origin
secret.taint
###Output
_____no_output_____
###Markdown
`ostr` objects are compatible with Python strings, except that string operations return `ostr` objects (together with the saved origin and taint information). An index of `-1` indicates that the corresponding character has no origin as supplied to the `ostr()` constructor:
###Code
secret_substr = (secret[0:4] + "-" + secret[6:])
secret_substr.taint
secret_substr.origin
###Output
_____no_output_____
###Markdown
`ostr` objects duplicate most `str` methods, as indicated in the class diagram:
###Code
# ignore
display_class_hierarchy(ostr)
###Output
_____no_output_____
###Markdown
Lessons Learned* String-based and character-based taints allow one to dynamically track the information flow from input to the internals of a system and back to the output.* Checking taints allows one to discover untrusted inputs and information leakage at runtime.* Data conversions and implicit data flow may strip taint information; the resulting untainted strings should be treated as having the worst possible taint.* Taints can be used in conjunction with fuzzing to provide a more robust indication of incorrect behavior than simply relying on program crashes. Next StepsAn even better alternative to our taint-directed fuzzing is to make use of _symbolic_ techniques that take the semantics of the program under test into account. The chapter on [flow fuzzing](FlowFuzzer.ipynb) introduces these symbolic techniques for the purpose of exploring information flows; the subsequent chapter on [symbolic fuzzing](SymbolicFuzzer.ipynb) then shows how to make full-fledged use of symbolic execution for covering code. Similarly, [search-based fuzzing](SearchBasedFuzzer.ipynb) can often provide a cheaper exploration strategy. BackgroundTaint analysis on Python using a library approach, as we implemented in this chapter, was discussed by Conti et al. \cite{Conti2010}. Exercises Exercise 1: Tainted NumbersIntroduce a class `tint` (for tainted integer) that, like `tstr`, has a taint attribute that gets passed on from `tint` to `tint`. Part 1: CreationImplement the `tint` class such that taints are set:
```python
x = tint(42, taint='SECRET')
assert x.taint == 'SECRET'
```
**Solution.** This is pretty straightforward, as we can apply the same scheme as for `tstr`:
###Code
class tint(int):
def __new__(cls, value, *args, **kw):
return int.__new__(cls, value)
def __init__(self, value, taint=None, **kwargs):
self.taint = taint
x = tint(42, taint='SECRET')
assert x.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 2: Arithmetic expressionsEnsure that taints get passed along arithmetic expressions; support addition, subtraction, multiplication, and division operators.
```python
y = x + 1
assert y.taint == 'SECRET'
```
**Solution.** As with `tstr`, we implement a `create()` method and a convenience function to quickly define all arithmetic operations:
###Code
class tint(tint):
def create(self, n):
# print("New tint from", n)
return tint(n, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_int_wrapper()` function creates a wrapper around an existing `int` method which attaches the taint to the result of the method:
###Code
def make_int_wrapper(fun):
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
# print(fun, args, kwargs, "=", repr(res))
return self.create(res)
return proxy
###Output
_____no_output_____
###Markdown
We do this for all arithmetic operators:
###Code
for name in ['__add__', '__radd__', '__mul__', '__rmul__', '__sub__',
'__floordiv__', '__truediv__']:
fun = getattr(int, name)
setattr(tint, name, make_int_wrapper(fun))
x = tint(42, taint='SECRET')
y = x + 1
y.taint # type: ignore
###Output
_____no_output_____
###Markdown
Part 3: Passing taints from integers to stringsConverting a tainted integer into a string (using `repr()`) should yield a tainted string:
```python
x_s = repr(x)
assert x_s.taint == 'SECRET'
```
**Solution.** We define the string conversion functions such that they return a tainted string (`tstr`):
###Code
class tint(tint):
def __repr__(self) -> tstr:
s = int.__repr__(self)
return tstr(s, taint=self.taint)
class tint(tint):
def __str__(self) -> tstr:
return tstr(int.__str__(self), taint=self.taint)
x = tint(42, taint='SECRET')
x_s = repr(x)
assert isinstance(x_s, tstr)
assert x_s.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 4: Passing taints from strings to integersConverting a tainted object (with a `taint` attribute) to an integer should pass that taint:
```python
password = tstr('1234', taint='NOT_EXACTLY_SECRET')
x = tint(password)
assert x == 1234
assert x.taint == 'NOT_EXACTLY_SECRET'
```
**Solution.** This can be done by having the `__init__()` constructor check for a `taint` attribute:
###Code
class tint(tint):
def __init__(self, value, taint=None, **kwargs):
if taint is not None:
self.taint = taint
else:
self.taint = getattr(value, 'taint', None)
password = tstr('1234', taint='NOT_EXACTLY_SECRET')
x = tint(password)
assert x == 1234
assert x.taint == 'NOT_EXACTLY_SECRET'
###Output
_____no_output_____
###Markdown
Tracking Information FlowWe have explored how one could generate better inputs that can penetrate deeper into the program in question. While doing so, we have relied on program crashes to tell us that we have succeeded in finding problems in the program. However, that is rather simplistic. What if the behavior of the program is simply incorrect, but does not lead to a crash? Can one do better?In this chapter, we explore in depth how to track information flows in Python, and how these flows can be used to determine whether a program behaved as expected.
###Code
from bookutils import YouTubeVideo
YouTubeVideo('MJ0VGzVbhYc')
###Output
_____no_output_____
###Markdown
**Prerequisites*** You should have read the [chapter on coverage](Coverage.ipynb).* You should have read the [chapter on probabilistic fuzzing](ProbabilisticGrammarFuzzer.ipynb). We first set up our infrastructure so that we can make use of previously defined functions.
###Code
import bookutils
from typing import List, Any, Optional, Union
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.InformationFlow import ```and then make use of the following features.This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string. Tracking String Taints`tstr` objects are replacements for Python strings that allows to track and check _taints_ – that is, information on from where a string originated. For instance, one can mark strings that originate from third party input with a taint of "LOW", meaning that they have a low security level. The taint is passed in the constructor of a `tstr` object:```python>>> thello = tstr('hello', taint='LOW')```A `tstr` object is fully compatible with original Python strings. For instance, we can index it and access substrings:```python>>> thello[:4]'hell'```However, the `tstr` object also stores the taint, which can be accessed using the `taint` attribute:```python>>> thello.taint'LOW'```The neat thing about taints is that they propagate to all strings derived from the original tainted string.Indeed, any operation from a `tstr` string that results in a string fragment produces another `tstr` object that includes the original taint. For example:```python>>> thello[1:2].taint type: ignore'LOW'````tstr` objects duplicate most `str` methods, as indicated in the class diagram: Tracking Character Origins`ostr` objects extend `tstr` objects by not only tracking a taint, but also the originating _indexes_ from the input string, This allows you to exactly track where individual characters came from. Assume you have a long string, which at index 100 contains the password `"joshua1234"`. Then you can save this origin information using an `ostr` as follows:```python>>> secret = ostr("joshua1234", origin=100, taint='SECRET')```The `origin` attribute of an `ostr` provides access to a list of indexes:```python>>> secret.origin[100, 101, 102, 103, 104, 105, 106, 107, 108, 109]>>> secret.taint'SECRET'````ostr` objects are compatible with Python strings, except that string operations return `ostr` objects (together with the saved origin an index information). An index of `-1` indicates that the corresponding character has no origin as supplied to the `ostr()` constructor:```python>>> secret_substr = (secret[0:4] + "-" + secret[6:])>>> secret_substr.taint'SECRET'>>> secret_substr.origin[100, 101, 102, 103, -1, 106, 107, 108, 109]````ostr` objects duplicate most `str` methods, as indicated in the class diagram: A Vulnerable DatabaseSay we want to implement an *in-memory database* service in Python. Here is a rather flimsy attempt. We use the following dataset.
###Code
INVENTORY = """\
1997,van,Ford,E350
2000,car,Mercury,Cougar
1999,car,Chevy,Venture\
"""
VEHICLES = INVENTORY.split('\n')
###Output
_____no_output_____
###Markdown
Our DB is a Python class that parses its arguments and throws `SQLException` which is defined below.
###Code
class SQLException(Exception):
pass
###Output
_____no_output_____
###Markdown
The database is simply a Python `dict` that is exposed only through SQL queries.
###Code
class DB:
def __init__(self, db={}):
self.db = dict(db)
###Output
_____no_output_____
###Markdown
Representing TablesThe database contains tables, which are created by a method call `create_table()`. Each table data structure is a pair of values. The first one is the meta data containing column names and types. The second value is a list of values in the table.
###Code
class DB(DB):
def create_table(self, table, defs):
self.db[table] = (defs, [])
###Output
_____no_output_____
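###Markdown
As a quick sketch of our own (the exact column definition format is an assumption here; we use a name-to-type mapping), creating a table yields a pair of column definitions and an initially empty row list:
###Code
# Sketch: a table is stored as a (column definitions, rows) pair
sample_db = DB()
sample_db.create_table('inventory',
                       {'year': int, 'kind': str, 'company': str, 'model': str})
defs, rows = sample_db.db['inventory']
assert list(defs) == ['year', 'kind', 'company', 'model']
assert rows == []
###Output
_____no_output_____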
###Markdown
In a similar fashion, we can prevent SQL and code injections discussed in [the chapter on Web fuzzing](WebFuzzer.ipynb). Taint Aware FuzzingWe can also use tainting to _direct fuzzing to those grammar rules that are likely to generate dangerous inputs._ The idea here is to identify inputs generated by our fuzzer that lead to untrusted execution. First we define the exception to be thrown when a tainted value reaches a dangerous operation.
###Code
class Tainted(Exception):
def __init__(self, v):
self.v = v
def __str__(self):
return 'Tainted[%s]' % self.v
###Output
_____no_output_____
###Markdown
TaintedDB
Next, since `my_eval()` is the most dangerous operation in the `DB` class, we define a new class `TaintedDB` that overrides `my_eval()` to throw an exception whenever an untrusted string reaches this part.
###Code
class TaintedDB(DB):
def my_eval(self, statement, g, l):
if statement.taint != 'TRUSTED':
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
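###Markdown
As a quick check of this behavior – a sketch that assumes `tstr` is available as in the rest of the chapter – a 'TRUSTED' string is evaluated, while any other taint is stopped before it reaches `eval()`:
###Code
demo_tdb = TaintedDB()  # hypothetical throwaway instance, just for this check
assert demo_tdb.my_eval(tstr('1 + 1', taint='TRUSTED'), {}, {}) == 2

try:
    demo_tdb.my_eval(tstr('1 + 1', taint='UNTRUSTED'), {}, {})
    assert False, "an untrusted string should have raised Tainted"
except Tainted:
    pass  # untrusted input never reaches eval()
###Output
_____no_output_____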
###Markdown
We initialize an instance of `TaintedDB`
###Code
tdb = TaintedDB()
tdb.db = db.db
###Output
_____no_output_____
###Markdown
Then we start fuzzing.
###Code
import traceback
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = tdb.sql(tstr(query, taint='UNTRUSTED'))
print(repr(res))
except SQLException as e:
pass
except Tainted as e:
print("> ", e)
except:
traceback.print_exc()
break
print()
###Output
'delete from inventory where y/u-l+f/y<Y(c)/A-H*q'
> Tainted[y/u-l+f/y<Y(c)/A-H*q]
"insert into inventory (G,Wmp,sl3hku3) values ('<','?')"
"insert into inventory (d0) values (',_G')"
'select P*Q-w/x from inventory where X<j==:==j*r-f'
> Tainted[(X<j==:==j*r-f)]
'select a>F*i from inventory where Q/I-_+P*j>.'
> Tainted[(Q/I-_+P*j>.)]
'select (V-i<T/g) from inventory where T/r/G<FK(m)/(i)'
> Tainted[(T/r/G<FK(m)/(i))]
'select (((i))),_(S,_)/L-k<H(Sv,R,n,W,Y) from inventory'
> Tainted[((((i))),_(S,_)/L-k<H(Sv,R,n,W,Y))]
'select (N==c*U/P/y),i-e/n*y,T!=w,u from inventory'
> Tainted[((N==c*U/P/y),i-e/n*y,T!=w,u)]
'update inventory set _=B,n=v where o-p*k-J>T'
'select s from inventory where w4g4<.m(_)/_>t'
> Tainted[(w4g4<.m(_)/_>t)]
###Markdown
One can see that `insert`, `update`, `select`, and `delete` statements on an existing table lead to taint exceptions. We can now focus on these specific kinds of inputs. However, this is not the only thing we can do. In later sections, we will see how we can use character origins to identify the specific portions of the input that reached tainted execution. But before that, we explore other uses of taints.
Preventing Privacy Leaks
Using taints, we can also ensure that secret information does not leak out. We can assign a special taint `"SECRET"` to strings whose information must not leak out:
###Code
secrets = tstr('<Plenty of secret keys>', taint='SECRET')
###Output
_____no_output_____
###Markdown
Accessing any substring of `secrets` will propagate the taint:
###Code
secrets[1:3].taint # type: ignore
###Output
_____no_output_____
###Markdown
Consider the _heartbeat_ security leak from [the chapter on Fuzzing](Fuzzer.ipynb), in which a server would accidentally reply not only the user input sent to it, but also secret memory. If the reply consists only of the user input, there is no taint associated with it:
###Code
user_input = "hello"
reply = user_input
isinstance(reply, tstr)
###Output
_____no_output_____
###Markdown
If, however, the reply contains _any_ part of the secret, the reply will be tainted:
###Code
reply = user_input + secrets[0:5]
reply
reply.taint # type: ignore
###Output
_____no_output_____
###Markdown
The output function of our server would now ensure that the data sent back does not contain any secret information:
###Code
def send_back(s):
    assert not (isinstance(s, tstr) and s.taint == 'SECRET')  # type: ignore
    ...
with ExpectError():
send_back(reply)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58879/3747050841.py", line 2, in <module>
send_back(reply)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58879/3158733057.py", line 2, in send_back
    assert not (isinstance(s, tstr) and s.taint == 'SECRET')  # type: ignore
AssertionError (expected)
###Markdown
Our `tstr` solution can help to identify information leaks – but it is by no means complete. If we actually take the `heartbeat()` implementation from [the chapter on Fuzzing](Fuzzer.ipynb), we will see that _any_ reply is marked as `SECRET` – even replies that do not access secret memory at all:
###Code
from Fuzzer import heartbeat
reply = heartbeat('hello', 5, memory=secrets)
reply.taint # type: ignore
###Output
_____no_output_____
###Markdown
Why is this? If we look into the implementation of `heartbeat()`, we will see that it first builds a long string `memory` from the (non-secret) reply and the (secret) memory, before returning the first characters from `memory`:
```python
# Store reply in memory
memory = reply + memory[len(reply):]
```
At this point, the whole `memory` string is still tainted as `SECRET`, _including_ the non-secret part from `reply`. We may be able to circumvent the issue by tagging the `reply` as `PUBLIC` – but then, this taint would be in conflict with the `SECRET` tag of `memory`. What happens if we compose a string from two differently tainted strings?
###Code
thilo = tstr("High", taint='HIGH') + tstr("Low", taint='LOW')
###Output
_____no_output_____
###Markdown
It turns out that in this case, the `__add__()` method takes precedence over the `__radd__()` method, which means that the right-hand `"Low"` string is treated as a regular (non-tainted) string.
###Code
thilo
thilo.taint # type: ignore
###Output
_____no_output_____
###Markdown
We could set up the `__add__()` and other methods with special handling for conflicting taints. However, the way this conflict should be resolved would be highly _application-dependent_:
* If we use taints to indicate _privacy levels_, `SECRET` privacy should take precedence over `PUBLIC` privacy. Any combination of a `SECRET`-tainted string and a `PUBLIC`-tainted string thus should have a `SECRET` taint.
* If we use taints to indicate _origins_ of information, an `UNTRUSTED` origin should take precedence over a `TRUSTED` origin. Any combination of an `UNTRUSTED`-tainted string and a `TRUSTED`-tainted string thus should have an `UNTRUSTED` taint.
Of course, such conflict resolutions can be implemented. But even so, they will not help us in the `heartbeat()` example to differentiate secret from non-secret output data.
Tracking Individual Characters
Fortunately, there is a better, more generic way to solve the above problems. The key to composing differently tainted strings is to assign taints not only to strings, but actually to every bit of information – in our case, characters. If every character has a taint of its own, a new composition of characters will simply inherit this very taint _per character_. To this end, we introduce a second bit of information named _origin_. Distinguishing various untrusted sources can be accomplished by giving each source its own separate origin (called *colors* in dynamic taint research). You will see an instance of this technique in the chapter on [Grammar Mining](GrammarMiner.ipynb). In this section, we carry *character-level* origins: given a fragment that resulted from a portion of the original origin-tracked string, one will be able to tell which portion of the input string the fragment was taken from. In essence, each input character index from an origin-tracked source gets its own color. More complex schemes such as *bitmap origins*, where a single character may result from multiple origin indexes (as in *checksum* operations on strings), are also possible; we do not consider these in this chapter.
A Class for Tracking Character Origins
Let us introduce a class `ostr` which, like `tstr`, carries a taint for each string, and additionally an _origin_ for each character that indicates its source. The origin is a consecutive number in a particular range (by default, starting with zero) indicating the character's _position_ within a specific origin.
###Code
class ostr(str):
"""Wrapper for strings, saving taint and origin information"""
DEFAULT_ORIGIN = 0
def __new__(cls, value, *args, **kw):
"""Create an ostr() instance. Used internally."""
return str.__new__(cls, value)
def __init__(self, value: Any, taint: Any = None,
origin: Optional[Union[int, List[int]]] = None, **kwargs) -> None:
"""Constructor.
`value` is the string value the `ostr` object is to be constructed from.
`taint` is an (optional) taint to be propagated to derived strings.
`origin` (optional) is either
- an integer denoting the index of the first character in `value`, or
- a list of integers denoting the origins of the characters in `value`,
"""
self.taint = taint
if origin is None:
origin = ostr.DEFAULT_ORIGIN
if isinstance(origin, int):
self.origin = list(range(origin, origin + len(self)))
else:
self.origin = origin
assert len(self.origin) == len(self)
###Output
_____no_output_____
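###Markdown
As a small sketch of the constructor's two `origin` forms – an integer start index or an explicit list of indexes:
###Code
assert ostr("hi", origin=10).origin == [10, 11]     # start index 10
assert ostr("hi", origin=[5, 7]).origin == [5, 7]   # explicit per-character origins
###Output
_____no_output_____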
###Markdown
As with `tstr`, above, we implement methods for conversion into (regular) Python strings:
###Code
class ostr(ostr):
def create(self, s):
return ostr(s, taint=self.taint, origin=self.origin)
class ostr(ostr):
UNKNOWN_ORIGIN = -1
def __repr__(self):
# handle escaped chars
origin = [ostr.UNKNOWN_ORIGIN]
for s, o in zip(str(self), self.origin):
origin.extend([o] * (len(repr(s)) - 2))
origin.append(ostr.UNKNOWN_ORIGIN)
return ostr(str.__repr__(self), taint=self.taint, origin=origin)
class ostr(ostr):
def __str__(self):
return str.__str__(self)
###Output
_____no_output_____
###Markdown
By default, character origins start with `0`:
###Code
othello = ostr('hello')
assert othello.origin == [0, 1, 2, 3, 4]
###Output
_____no_output_____
###Markdown
We can also specify the starting origin as below -- `6..10`
###Code
tworld = ostr('world', origin=6)
assert tworld.origin == [6, 7, 8, 9, 10]
a = ostr("hello\tworld")
repr(a).origin # type: ignore
###Output
_____no_output_____
###Markdown
`str()` returns a `str` instance without origin or taint information:
###Code
assert type(str(othello)) == str
###Output
_____no_output_____
###Markdown
`repr()`, however, keeps the origin information for the original string:
###Code
repr(othello)
repr(othello).origin # type: ignore
###Output
_____no_output_____
###Markdown
Just as with taints, we can clear origins and check whether an origin is present:
###Code
class ostr(ostr):
def clear_taint(self):
self.taint = None
return self
def has_taint(self):
return self.taint is not None
class ostr(ostr):
def clear_origin(self):
self.origin = [self.UNKNOWN_ORIGIN] * len(self)
return self
def has_origin(self):
return any(origin != self.UNKNOWN_ORIGIN for origin in self.origin)
othello = ostr('Hello')
assert othello.has_origin()
othello.clear_origin()
assert not othello.has_origin()
###Output
_____no_output_____
###Markdown
In the remainder of this section, we re-implement various string methods such that they also keep track of origins. If this is too tedious for you, jump right [to the next section](Checking-Origins), which gives a number of usage examples.
Excursion: Implementing String Methods
Create
We need to create new substrings that are wrapped in `ostr` objects. However, we also want to allow our subclasses to create their own instances. Hence, we again provide a `create()` method that produces a new `ostr` instance.
###Code
class ostr(ostr):
def create(self, res, origin=None):
return ostr(res, taint=self.taint, origin=origin)
othello = ostr('hello', taint='HIGH')
otworld = othello.create('world', origin=6)
otworld.origin
otworld.taint
assert (othello.origin, otworld.origin) == (
[0, 1, 2, 3, 4], [6, 7, 8, 9, 10])
###Output
_____no_output_____
###Markdown
Index
In Python, indexing is provided through `__getitem__()`. Indexing on positive integers is simple enough. However, it has two additional wrinkles. The first is that, if the index is negative, characters are counted from the end of the string: the last character has the negative index `-1`.
###Code
class ostr(ostr):
def __getitem__(self, key):
res = super().__getitem__(key)
if isinstance(key, int):
key = len(self) + key if key < 0 else key
return self.create(res, [self.origin[key]])
elif isinstance(key, slice):
return self.create(res, self.origin[key])
else:
assert False
ohello = ostr('hello', taint='HIGH')
assert (ohello[0], ohello[-1]) == ('h', 'o')
ohello[0].taint
###Output
_____no_output_____
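###Markdown
As an additional check – a sketch reusing the `ohello` instance from above – the origin follows the index, including negative ones:
###Code
# The last character 'o' of 'hello' comes from input position 4
assert ohello[-1].origin == [4]
assert ohello[-1].taint == 'HIGH'
###Output
_____no_output_____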
###Markdown
The other wrinkle is that `__getitem__()` can accept a slice. We discuss this next.
Slices
The Python `slice` operator `[n:m]` relies on the object being an `iterator`. Hence, we define the `__iter__()` method, which returns a custom iterator.
###Code
class ostr(ostr):
def __iter__(self):
return ostr_iterator(self)
###Output
_____no_output_____
###Markdown
The `__iter__()` method requires a supporting `iterator` object. The `iterator` is used to save the state of the current iteration, which it does by keeping a reference to the original `ostr`, and the current index of iteration `_str_idx`.
###Code
class ostr_iterator():
def __init__(self, ostr):
self._ostr = ostr
self._str_idx = 0
def __next__(self):
if self._str_idx == len(self._ostr):
raise StopIteration
# calls ostr getitem should be ostr
c = self._ostr[self._str_idx]
assert isinstance(c, ostr)
self._str_idx += 1
return c
###Output
_____no_output_____
###Markdown
Bringing all these together:
###Code
thw = ostr('hello world', taint='HIGH')
thw[0:5]
assert thw[0:5].has_taint()
assert thw[0:5].has_origin()
thw[0:5].taint
thw[0:5].origin
###Output
_____no_output_____
###Markdown
Splits
###Code
def make_split_wrapper(fun):
def proxy(self, *args, **kwargs):
lst = fun(self, *args, **kwargs)
return [self.create(elem) for elem in lst]
return proxy
for name in ['split', 'rsplit', 'splitlines']:
fun = getattr(str, name)
setattr(ostr, name, make_split_wrapper(fun))
othello = ostr('hello world', taint='LOW')
othello == 'hello world'
othello.split()[0].taint # type: ignore
###Output
_____no_output_____
###Markdown
(Exercise for the reader: handle _partitions_, i.e., splitting a string by substrings.)
Concatenation
If two origin-tracked strings are concatenated together, it may be desirable to transfer the origins from each to the corresponding portion of the resulting string. The concatenation of strings is accomplished by overriding `__add__()`.
###Code
class ostr(ostr):
def __add__(self, other):
if isinstance(other, ostr):
return self.create(str.__add__(self, other),
(self.origin + other.origin))
else:
return self.create(str.__add__(self, other),
(self.origin + [self.UNKNOWN_ORIGIN for i in other]))
###Output
_____no_output_____
###Markdown
Testing concatenations between two `ostr` instances:
###Code
othello = ostr("hello")
otworld = ostr("world", origin=6)
othw = othello + otworld
assert othw.origin == [0, 1, 2, 3, 4, 6, 7, 8, 9, 10] # type: ignore
###Output
_____no_output_____
###Markdown
What if an `ostr` is concatenated with a `str`?
###Code
space = " "
th_w = othello + space + otworld
assert th_w.origin == [
0,
1,
2,
3,
4,
ostr.UNKNOWN_ORIGIN,
ostr.UNKNOWN_ORIGIN,
6,
7,
8,
9,
10]
###Output
_____no_output_____
###Markdown
One wrinkle here is that when adding an `ostr` and a `str`, the user may place the `str` first, in which case the `__add__()` method will be called on the `str` instance, not on the `ostr` instance. However, Python provides a solution: if one defines `__radd__()` on the `ostr` instance, that method will be called rather than `str.__add__()`.
###Code
class ostr(ostr):
def __radd__(self, other):
origin = other.origin if isinstance(other, ostr) else [
self.UNKNOWN_ORIGIN for i in other]
return self.create(str.__add__(other, self), (origin + self.origin))
###Output
_____no_output_____
###Markdown
We test it out:
###Code
shello = "hello"
otworld = ostr("world")
thw = shello + otworld
assert thw.origin == [ostr.UNKNOWN_ORIGIN] * len(shello) + [0, 1, 2, 3, 4] # type: ignore
###Output
_____no_output_____
###Markdown
These two methods (slicing and concatenation) are sufficient to implement other string methods that result in a string and do not change the characters themselves (i.e., no case change). Hence, we look at a helper method next.
Extract Origin String
Given a specific input index, the method `x()` extracts the corresponding origin-tracked portion from an `ostr`. As a convenience, it supports a `slice` as well as an `int` argument.
###Code
class ostr(ostr):
class TaintException(Exception):
pass
def x(self, i=0):
"""Extract substring at index/slice `i`"""
if not self.origin:
            raise ostr.TaintException('Invalid request idx')
if isinstance(i, int):
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j == i]]
elif isinstance(i, slice):
r = range(i.start or 0, i.stop or len(self), i.step or 1)
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j in r]]
thw = ostr('hello world', origin=100)
assert thw.x(101) == ['e']
assert thw.x(slice(101, 105)) == ['e', 'l', 'l', 'o']
###Output
_____no_output_____
###Markdown
Replace The `replace()` method replaces a portion of the string with another.
###Code
class ostr(ostr):
def replace(self, a, b, n=None):
old_origin = self.origin
b_origin = b.origin if isinstance(
b, ostr) else [self.UNKNOWN_ORIGIN] * len(b)
mystr = str(self)
i = 0
while True:
if n and i >= n:
break
idx = mystr.find(a)
if idx == -1:
break
last = idx + len(a)
mystr = mystr.replace(a, b, 1)
partA, partB = old_origin[0:idx], old_origin[last:]
old_origin = partA + b_origin + partB
i += 1
return self.create(mystr, old_origin)
my_str = ostr("aa cde aa")
res = my_str.replace('aa', 'bb')
assert (res, res.origin) == ('bb cde bb',
                             [ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN,
                              2, 3, 4, 5, 6,
                              ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN])
my_str = ostr("aa cde aa")
res = my_str.replace('aa', ostr('bb', origin=100))
assert (
res, res.origin) == (
('bb cde bb'), [
100, 101, 2, 3, 4, 5, 6, 100, 101])
###Output
_____no_output_____
###Markdown
Split We essentially have to re-implement split operations, and split by space is slightly different from other splits.
###Code
class ostr(ostr):
def _split_helper(self, sep, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = len(sep)
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
first_idx = last_idx + sep_len
return result_list
def _split_space(self, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = 0
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
v = str(self[last_idx:])
sep_len = len(v) - len(v.lstrip(' '))
first_idx = last_idx + sep_len
return result_list
def rsplit(self, sep=None, maxsplit=-1):
splitted = super().rsplit(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
def split(self, sep=None, maxsplit=-1):
splitted = super().split(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
my_str = ostr('ab cdef ghij kl')
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([0, 1], [3, 4, 5, 6], [8, 9, 10, 11], [13, 14])
my_str = ostr('ab cdef ghij kl', origin=list(range(0, 15)))
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert(ab.origin, cdef.origin, kl.origin) == ([0, 1], [3, 4, 5, 6], [13, 14])
my_str = ostr('ab   cdef ghij    kl', origin=100, taint='HIGH')
ab, cdef, ghij, kl = my_str.rsplit()
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([100, 101], [105, 106, 107, 108], [110, 111, 112, 113],
[118, 119])
my_str = ostr('ab   cdef ghij    kl', origin=list(range(0, 20)), taint='HIGH')
ab, cdef, ghij, kl = my_str.split()
assert (ab.origin, cdef.origin, kl.origin) == ([0, 1], [5, 6, 7, 8], [18, 19])
assert ab.taint == 'HIGH'
###Output
_____no_output_____
###Markdown
Strip
###Code
class ostr(ostr):
def strip(self, cl=None):
return self.lstrip(cl).rstrip(cl)
def lstrip(self, cl=None):
res = super().lstrip(cl)
i = self.find(res)
return self[i:]
def rstrip(self, cl=None):
res = super().rstrip(cl)
return self[0:len(res)]
my_str1 = ostr("  abc  ")
v = my_str1.strip()
assert (v, v.origin) == ('abc', [2, 3, 4])
my_str1 = ostr("  abc  ")
v = my_str1.lstrip()
assert (v, v.origin) == ('abc  ', [2, 3, 4, 5, 6])
my_str1 = ostr("  abc  ")
v = my_str1.rstrip()
assert (v, v.origin) == ('  abc', [0, 1, 2, 3, 4])
###Output
_____no_output_____
###Markdown
Expand Tabs
###Code
class ostr(ostr):
def expandtabs(self, n=8):
parts = self.split('\t')
res = super().expandtabs(n)
all_parts = []
for i, p in enumerate(parts):
all_parts.extend(p.origin)
if i < len(parts) - 1:
l = len(all_parts) % n
all_parts.extend([p.origin[-1]] * l)
return self.create(res, all_parts)
my_s = str("ab\tcd")
my_ostr = ostr("ab\tcd")
v1 = my_s.expandtabs(4)
v2 = my_ostr.expandtabs(4)
assert str(v1) == str(v2)
assert (len(v1), repr(v2), v2.origin) == (6, "'ab  cd'", [0, 1, 1, 1, 3, 4])
class ostr(ostr):
def join(self, iterable):
mystr = ''
myorigin = []
sep_origin = self.origin
lst = list(iterable)
for i, s in enumerate(lst):
sorigin = s.origin if isinstance(s, ostr) else [
self.UNKNOWN_ORIGIN] * len(s)
myorigin.extend(sorigin)
mystr += str(s)
if i < len(lst) - 1:
myorigin.extend(sep_origin)
mystr += str(self)
res = super().join(iterable)
assert len(res) == len(mystr)
return self.create(res, myorigin)
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104]) # type: ignore
v4 = ostr('').join([v2, v3, v1])
assert (
v4, v4.origin) == (
'cdefab', [
103, 104, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 100, 101])
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104]) # type: ignore
v4 = ostr(',').join([v2, v3, v1])
assert (v4, v4.origin) == ('cd,ef,ab',
[103, 104, 0, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 0, 100, 101]) # type: ignore
###Output
_____no_output_____
###Markdown
Partitions
###Code
class ostr(ostr):
def partition(self, sep):
partA, sep, partB = super().partition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
def rpartition(self, sep):
partA, sep, partB = super().rpartition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
###Output
_____no_output_____
###Markdown
Justify
###Code
class ostr(ostr):
def ljust(self, width, fillchar=' '):
res = super().ljust(width, fillchar)
initial = len(res) - len(self)
if isinstance(fillchar, tstr):
t = fillchar.x()
else:
t = self.UNKNOWN_ORIGIN
return self.create(res, [t] * initial + self.origin)
class ostr(ostr):
def rjust(self, width, fillchar=' '):
res = super().rjust(width, fillchar)
final = len(res) - len(self)
if isinstance(fillchar, tstr):
t = fillchar.x()
else:
t = self.UNKNOWN_ORIGIN
return self.create(res, self.origin + [t] * final)
###Output
_____no_output_____
###Markdown
mod
###Code
class ostr(ostr):
def __mod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
s_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = self.find('%s')
assert i >= 0
res = super().__mod__(s)
r_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
class ostr(ostr):
def __rmod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
r_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = s.find('%s')
assert i >= 0
res = super().__rmod__(s)
s_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
a = ostr('hello %s world', origin=100)
a
(a % 'good').origin
b = 'hello %s world'
c = ostr('bad', origin=10)
(b % c).origin
###Output
_____no_output_____
###Markdown
String methods that do not change origin
###Code
class ostr(ostr):
def swapcase(self):
return self.create(str(self).swapcase(), self.origin)
def upper(self):
return self.create(str(self).upper(), self.origin)
def lower(self):
return self.create(str(self).lower(), self.origin)
def capitalize(self):
return self.create(str(self).capitalize(), self.origin)
def title(self):
return self.create(str(self).title(), self.origin)
a = ostr('aa', origin=100).upper()
a, a.origin
###Output
_____no_output_____
###Markdown
General wrappers These are not strictly needed for operation, but can be useful for tracing.
###Code
def make_basic_str_wrapper(fun): # type: ignore
def proxy(*args, **kwargs):
res = fun(*args, **kwargs)
return res
return proxy
import inspect
import types
def informationflow_init_2():
ostr_members = [name for name, fn in inspect.getmembers(ostr, callable)
if isinstance(fn, types.FunctionType) and fn.__qualname__.startswith('ostr')]
for name, fn in inspect.getmembers(str, callable):
if name not in set(['__class__', '__new__', '__str__', '__init__',
'__repr__', '__getattribute__']) | set(ostr_members):
setattr(ostr, name, make_basic_str_wrapper(fn))
informationflow_init_2()
INITIALIZER_LIST.append(informationflow_init_2)
###Output
_____no_output_____
###Markdown
Methods yet to be translated These methods generate strings from other strings. However, we do not have the right implementations for any of these. Hence these are marked as dangerous until we can generate the right translations.
###Code
def make_str_abort_wrapper(fun):
def proxy(*args, **kwargs):
raise ostr.TaintException(
'%s Not implemented in `ostr`' %
fun.__name__)
return proxy
def informationflow_init_3():
for name, fn in inspect.getmembers(str, callable):
# Omitted 'splitlines' as this is needed for formatting output in
# IPython/Jupyter
if name in ['__format__', 'format_map', 'format',
'__mul__', '__rmul__', 'center', 'zfill', 'decode', 'encode']:
setattr(ostr, name, make_str_abort_wrapper(fn))
informationflow_init_3()
INITIALIZER_LIST.append(informationflow_init_3)
###Output
_____no_output_____
###Markdown
While generating proxy wrappers for string operations can handle most common cases of information flow, some of the operations involving strings cannot be overridden. For example, consider the following.
End of Excursion
Checking Origins
With all this implemented, we now have full-fledged `ostr` strings where we can easily check the origin of each and every character. To check whether a string originates from another string, we can convert the origins to sets and resort to standard set operations:
###Code
s = ostr("hello", origin=100)
s[1]
s[1].origin
set(s[1].origin) <= set(s.origin)
t = ostr("world", origin=200)
set(s.origin) <= set(t.origin)
u = s + t + "!"
u.origin
ostr.UNKNOWN_ORIGIN in u.origin
###Output
_____no_output_____
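###Markdown
Building on these set operations, one can phrase the check as a small helper – a sketch; the function name `originates_from()` is ours:
###Code
def originates_from(fragment: ostr, source: ostr) -> bool:
    """Return True if every character of `fragment` stems from `source`."""
    return set(fragment.origin) <= set(source.origin)

assert originates_from(s[1:3], s)
assert not originates_from(u, s)   # `u` also contains characters from `t` and "!"
###Output
_____no_output_____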
###Markdown
Privacy Leaks Revisited
Let us apply this to see whether we can come up with a satisfactory solution for checking the `heartbeat()` function against information leakage.
###Code
SECRET_ORIGIN = 1000
###Output
_____no_output_____
###Markdown
We define a "secret" that must not leak out:
###Code
secret = ostr('<again, some super-secret input>', origin=SECRET_ORIGIN)
###Output
_____no_output_____
###Markdown
Each and every character in `secret` has an origin starting with `SECRET_ORIGIN`:
###Code
print(secret.origin)
###Output
[1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031]
###Markdown
If we now invoke `heartbeat()` with a given string, the origins of the reply should all be `UNKNOWN_ORIGIN` (from the input), and none of the characters should have a `SECRET_ORIGIN`.
###Code
hello_s = heartbeat('hello', 5, memory=secret)
hello_s
assert isinstance(hello_s, ostr)
print(hello_s.origin)
###Output
[-1, -1, -1, -1, -1]
###Markdown
We can verify that the secret did not leak out by formulating appropriate assertions:
###Code
assert hello_s.origin == [ostr.UNKNOWN_ORIGIN] * len(hello_s)
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in hello_s.origin)
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
###Output
_____no_output_____
###Markdown
All assertions pass, again confirming that no secret leaked out. Let us now go and exploit `heartbeat()` to reveal its secrets. As `heartbeat()` is unchanged, it is as vulnerable as it was:
###Code
hello_s = heartbeat('hello', 32, memory=secret)
hello_s
###Output
_____no_output_____
###Markdown
Now, however, the reply _does_ contain secret information:
###Code
assert isinstance(hello_s, ostr)
print(hello_s.origin)
with ExpectError():
assert hello_s.origin == [ostr.UNKNOWN_ORIGIN] * len(hello_s)
with ExpectError():
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in hello_s.origin)
with ExpectError():
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58879/1577803914.py", line 2, in <module>
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
AssertionError (expected)
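###Markdown
One way to act on these origins – a sketch, not the chapter's solution; the function name `send_back_checked()` is ours – is to guard the output function with exactly this kind of origin check:
###Code
def send_back_checked(s):
    # Refuse to send anything that contains characters of secret origin
    if isinstance(s, ostr) and any(o >= SECRET_ORIGIN for o in s.origin):
        raise Tainted(s)
    ...  # otherwise, send `s` out

try:
    send_back_checked(hello_s)
    assert False, "the leaky reply should have been refused"
except Tainted:
    pass  # the leak is caught before any data goes out
###Output
_____no_output_____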
###Markdown
We can now integrate these assertions into the `heartbeat()` function, causing it to fail before leaking information. Additionally (or alternatively), we can also rewrite our output functions not to give out any secret information. We will leave these two exercises for the reader.
Taint-Directed Fuzzing
The previous _taint-aware fuzzing_ was a bit unsatisfactory in that we could not focus on the specific parts of the grammar that led to dangerous operations. We fix that with _taint-directed fuzzing_ using `TrackingDB`. The idea here is to track the origins of each character that reaches `eval()`, trace them back to the grammar nodes that generated them, and increase the probability of using those nodes again.
TrackingDB
The `TrackingDB` is similar to `TaintedDB`. The difference is that, if we find that execution has reached `my_eval()`, we simply raise `Tainted`.
###Code
class TrackingDB(TaintedDB):
def my_eval(self, statement, g, l):
if statement.origin:
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
Next, we need a specially crafted fuzzer that preserves the taints.
TaintedGrammarFuzzer
We define a `TaintedGrammarFuzzer` class that ensures that the taints propagate to the derivation tree. This is similar to the `GrammarFuzzer` from the [chapter on grammar fuzzers](GrammarFuzzer.ipynb), except that the origins and taints are preserved.
###Code
import random
from GrammarFuzzer import GrammarFuzzer
from Parser import canonical
class TaintedGrammarFuzzer(GrammarFuzzer):
def __init__(self,
grammar,
start_symbol=START_SYMBOL,
expansion_switch=1,
log=False):
self.tainted_start_symbol = ostr(
start_symbol, origin=[1] * len(start_symbol))
self.expansion_switch = expansion_switch
self.log = log
self.grammar = grammar
self.c_grammar = canonical(grammar)
self.init_tainted_grammar()
def expansion_cost(self, expansion, seen=set()):
symbols = [e for e in expansion if e in self.c_grammar]
if len(symbols) == 0:
return 1
if any(s in seen for s in symbols):
return float('inf')
return sum(self.symbol_cost(s, seen) for s in symbols) + 1
def fuzz_tree(self):
tree = (self.tainted_start_symbol, [])
nt_leaves = [tree]
expansion_trials = 0
while nt_leaves:
idx = random.randint(0, len(nt_leaves) - 1)
key, children = nt_leaves[idx]
expansions = self.ct_grammar[key]
if expansion_trials < self.expansion_switch:
expansion = random.choice(expansions)
else:
costs = [self.expansion_cost(e) for e in expansions]
m = min(costs)
all_min = [i for i, c in enumerate(costs) if c == m]
expansion = expansions[random.choice(all_min)]
new_leaves = [(token, []) for token in expansion]
new_nt_leaves = [e for e in new_leaves if e[0] in self.ct_grammar]
children[:] = new_leaves
nt_leaves[idx:idx + 1] = new_nt_leaves
if self.log:
print("%-40s" % (key + " -> " + str(expansion)))
expansion_trials += 1
return tree
def fuzz(self):
self.derivation_tree = self.fuzz_tree()
return self.tree_to_string(self.derivation_tree)
###Output
_____no_output_____
###Markdown
We use a specially prepared tainted grammar for fuzzing. We mark each individual definition, each individual rule, and each individual token with a separate origin (we chose a token boundary of 10 here, after inspecting the grammar). This allows us to track exactly which parts of the grammar were involved in the operations we are interested in.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def init_tainted_grammar(self):
key_increment, alt_increment, token_increment = 1000, 100, 10
key_origin = key_increment
self.ct_grammar = {}
for key, val in self.c_grammar.items():
key_origin += key_increment
os = []
for v in val:
ts = []
key_origin += alt_increment
for t in v:
nt = ostr(t, origin=key_origin)
key_origin += token_increment
ts.append(nt)
os.append(ts)
self.ct_grammar[key] = os
# a use tracking grammar
self.ctp_grammar = {}
for key, val in self.ct_grammar.items():
self.ctp_grammar[key] = [(v, dict(use=0)) for v in val]
###Output
_____no_output_____
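###Markdown
To see what this marking produces, here is a sketch on a hypothetical two-rule grammar (not the chapter's `INVENTORY_GRAMMAR_F`): every token of every alternative becomes an `ostr` with its own origin range.
###Code
toy_fuzzer = TaintedGrammarFuzzer({
    "<start>": ["<digit>"],
    "<digit>": ["0", "1"]
})
(digit_zero,), (digit_one,) = toy_fuzzer.ct_grammar["<digit>"]
assert isinstance(digit_zero, ostr) and isinstance(digit_one, ostr)
assert digit_zero.origin != digit_one.origin  # distinct alternatives, distinct origins
###Output
_____no_output_____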
###Markdown
As before, we initialize the `TrackingDB`
###Code
trdb = TrackingDB(db.db)
###Output
_____no_output_____
###Markdown
Finally, we need to ensure that the taints are preserved when the tree is converted back to a string. For this, we override `tree_to_string()`:
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def tree_to_string(self, tree):
symbol, children, *_ = tree
e = ostr('')
if children:
return e.join([self.tree_to_string(c) for c in children])
else:
return e if symbol in self.c_grammar else symbol
###Output
_____no_output_____
###Markdown
We define `update_grammar()`, which takes the set of origins that reached the dangerous operation together with the derivation tree of the string used for fuzzing, and uses them to update the enhanced grammar.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def update_grammar(self, origin, dtree):
def update_tree(dtree, origin):
key, children = dtree
if children:
updated_children = [update_tree(c, origin) for c in children]
corigin = set.union(
*[o for (key, children, o) in updated_children])
corigin = corigin.union(set(key.origin))
return (key, children, corigin)
else:
my_origin = set(key.origin).intersection(origin)
return (key, [], my_origin)
key, children, oset = update_tree(dtree, set(origin))
for key, alts in self.ctp_grammar.items():
for alt, o in alts:
alt_origins = set([i for token in alt for i in token.origin])
if alt_origins.intersection(oset):
o['use'] += 1
###Output
_____no_output_____
###Markdown
With these, we are now ready to fuzz.
###Code
def tree_type(tree):
key, children = tree
return (type(key), key, [tree_type(c) for c in children])
tgf = TaintedGrammarFuzzer(INVENTORY_GRAMMAR_F)
x = None
for _ in range(10):
qtree = tgf.fuzz_tree()
query = tgf.tree_to_string(qtree)
assert isinstance(query, ostr)
try:
print(repr(query))
res = trdb.sql(query)
print(repr(res))
except SQLException as e:
print(e)
except Tainted as e:
print(e)
origin = e.args[0].origin
tgf.update_grammar(origin, qtree)
except:
traceback.print_exc()
break
print()
###Output
'select (g!=(9)!=((:)==2==9)!=J)==-7 from inventory'
Tainted[((g!=(9)!=((:)==2==9)!=J)==-7)]
'delete from inventory where ((c)==T)!=5==(8!=Y)!=-5'
Tainted[((c)==T)!=5==(8!=Y)!=-5]
'select (((w==(((X!=------8)))))) from inventory'
Tainted[((((w==(((X!=------8)))))))]
'delete from inventory where ((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))'
Tainted[((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))]
'delete from inventory where ((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))'
Tainted[((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))]
'delete from inventory where o!=2==---5==3!=t'
Tainted[o!=2==---5==3!=t]
'select (2) from inventory'
Tainted[((2))]
'select _ from inventory'
Tainted[(_)]
'select L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)) from inventory'
Tainted[(L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)))]
'delete from inventory where _==(7==(9)!=(---5)==1)==-8'
Tainted[_==(7==(9)!=(---5)==1)==-8]
###Markdown
We can now inspect our enhanced grammar to see how many times each rule was used.
###Code
tgf.ctp_grammar
###Output
_____no_output_____
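###Markdown
These `use` counters suggest the follow-up step – a sketch; the helper name `most_used_alternatives()` is ours – of ranking the alternatives by how often they reached the dangerous `eval()`:
###Code
def most_used_alternatives(ctp_grammar, n=5):
    """Rank alternatives by how often they reached a dangerous operation."""
    ranked = sorted(((meta['use'], key, alt)
                     for key, alts in ctp_grammar.items()
                     for alt, meta in alts),
                    key=lambda entry: entry[0],
                    reverse=True)
    return ranked[:n]

top_alternatives = most_used_alternatives(tgf.ctp_grammar)
assert len(top_alternatives) <= 5
###Output
_____no_output_____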
###Markdown
From here, the idea is to focus on the rules that reached dangerous operations more often, and to increase the probability of generating values of that kind.
The Limits of Taint Tracking
While our framework can detect information leakage, it is by no means perfect. There are several ways in which taints can get lost, and information thus may still leak out.
Conversions
We only track taints and origins through _strings_ and _characters_. If we convert these to numbers (or other data), the information is lost. As an example, consider this function, converting individual characters to numbers and back:
###Code
def strip_all_info(s):
t = ""
for c in s:
t += chr(ord(c))
return t
othello = ostr("Secret")
othello
othello.origin # type: ignore
###Output
_____no_output_____
###Markdown
The taints and origins will not propagate through the number conversion:
###Code
othello_stripped = strip_all_info(othello)
othello_stripped
with ExpectError():
    othello_stripped.origin
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58879/588526133.py", line 2, in <module>
    othello_stripped.origin
AttributeError: 'str' object has no attribute 'origin' (expected)
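###Markdown
Before moving on, here is a minimal sketch of one possible workaround (the helper names `oord()` and `ochr()` are ours; this is our illustration, not the chapter's solution): carry the origin alongside the number so that it survives the round trip.
###Code
def oord(c):
    """Like ord(), but also return the character's origin (if any)."""
    origin = c.origin if isinstance(c, ostr) else [ostr.UNKNOWN_ORIGIN]
    return ord(c), origin

def ochr(code, origin):
    """Like chr(), but restore the remembered origin."""
    return ostr(chr(code), origin=origin)

osecret = ostr("hi", origin=100)  # hypothetical origin-tracked input
code, origin = oord(osecret[0])
assert ochr(code, origin).origin == [100]
###Output
_____no_output_____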
###Markdown
This issue could be addressed by extending numbers with taints and origins, just as we did for strings. At some point, however, this will still break down, because as soon as an internal C function in the Python library is reached, the taint will not propagate into and across the C function. (Unless one starts implementing dynamic taints for these, that is.) Internal C libraries As we mentioned before, calls to _internal_ C libraries do not propagate taints. For example, while the following preserves the taints,
###Code
hello = ostr('hello', origin=100)
world = ostr('world', origin=200)
(hello + ' ' + world).origin
###Output
_____no_output_____
###Markdown
a call to a `join` that should be equivalent will fail.
###Code
with ExpectError():
''.join([hello, ' ', world]).origin # type: ignore
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58879/2341342688.py", line 2, in <module>
''.join([hello, ' ', world]).origin # type: ignore
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
Implicit Information Flow
Even if one could taint all data in a program, there would still be means to break information flow – notably by turning explicit flow into _implicit_ flow, or data flow into _control flow_. Here is an example:
###Code
def strip_all_info_again(s):
t = ""
for c in s:
if c == 'a':
t += 'a'
elif c == 'b':
t += 'b'
elif c == 'c':
t += 'c'
...
###Output
_____no_output_____
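###Markdown
Such implicit flows call for a defensive policy, spelled out below. As a sketch of one ingredient (the `sanitize()` helper, its whitelist, and the taint names are ours), a dedicated sanitizer could be the only place where an untrusted string regains trust:
###Code
import re

def sanitize(s: str) -> tstr:
    """Return a TRUSTED copy of `s` if it looks harmless; reject it otherwise."""
    if re.fullmatch(r'[a-zA-Z0-9_ ]*', s):
        return tstr(str(s), taint='TRUSTED')
    raise Tainted(s)

assert sanitize(tstr('hello world', taint='UNTRUSTED')).taint == 'TRUSTED'
###Output
_____no_output_____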
###Markdown
With such a function, there is no explicit data flow between the characters in `s` and the characters in `t`; yet, the strings would be identical. This problem frequently occurs in programs that process and manipulate external input.
Enforcing Tainting
Conversions and implicit information flow are but two of several ways in which taint and origin information can get lost. To address the problem, the best solution is to _always assume the worst from untainted strings_:
* When it comes to trust, an untainted string should be treated as _possibly untrusted_, and hence not relied upon unless sanitized.
* When it comes to privacy, an untainted string should be treated as _possibly secret_, and hence not leaked out.
As a consequence, your program should always have two kinds of taints: one for explicitly trusted (or secret) strings and one for explicitly untrusted (or non-secret) strings. If a taint gets lost along the way, you may have to restore it from its sources – not unlike the string methods discussed above. The benefit is a trusted application, in which each and every information flow can be checked at runtime, with violations quickly discovered through automated tests.
Synopsis
This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string.
Tracking String Taints
`tstr` objects are replacements for Python strings that allow one to track and check _taints_ – that is, information on where a string originated from. For instance, one can mark strings that originate from third-party input with a taint of "LOW", meaning that they have a low security level. The taint is passed in the constructor of a `tstr` object:
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
A `tstr` object is fully compatible with original Python strings. For instance, we can index it and access substrings:
###Code
thello[:4]
###Output
_____no_output_____
###Markdown
However, the `tstr` object also stores the taint, which can be accessed using the `taint` attribute:
###Code
thello.taint
###Output
_____no_output_____
###Markdown
The neat thing about taints is that they propagate to all strings derived from the original tainted string. Indeed, any operation from a `tstr` string that results in a string fragment produces another `tstr` object that includes the original taint. For example:
###Code
thello[1:2].taint # type: ignore
###Output
_____no_output_____
###Markdown
`tstr` objects duplicate most `str` methods, as indicated in the class diagram:
###Code
# ignore
from ClassDiagram import display_class_hierarchy
display_class_hierarchy(tstr)
###Output
_____no_output_____
###Markdown
Tracking Character Origins
`ostr` objects extend `tstr` objects by tracking not only a taint, but also the originating _indexes_ from the input string. This allows you to track exactly where individual characters came from. Assume you have a long string, which at index 100 contains the password `"joshua1234"`. Then you can save this origin information using an `ostr` as follows:
###Code
secret = ostr("joshua1234", origin=100, taint='SECRET')
###Output
_____no_output_____
###Markdown
The `origin` attribute of an `ostr` provides access to a list of indexes:
###Code
secret.origin
secret.taint
###Output
_____no_output_____
###Markdown
`ostr` objects are compatible with Python strings, except that string operations return `ostr` objects (together with the saved origin and index information). An index of `-1` indicates that the corresponding character has no origin as supplied to the `ostr()` constructor:
###Code
secret_substr = (secret[0:4] + "-" + secret[6:])
secret_substr.taint
secret_substr.origin
###Output
_____no_output_____
###Markdown
`ostr` objects duplicate most `str` methods, as indicated in the class diagram:
###Code
# ignore
display_class_hierarchy(ostr)
###Output
_____no_output_____
###Markdown
Lessons Learned
* String-based and character-based taints allow us to dynamically track the information flow from input to the internals of a system and back to the output.
* Checking taints allows us to discover untrusted inputs and information leakage at runtime.
* Data conversions and implicit data flow may strip taint information; the resulting untainted strings should be treated as having the worst possible taint.
* Taints can be used in conjunction with fuzzing to provide a more robust indication of incorrect behavior than simply relying on program crashes.
Next Steps
An even better alternative to our taint-directed fuzzing is to make use of _symbolic_ techniques that take the semantics of the program under test into account. The chapter on [flow fuzzing](FlowFuzzer.ipynb) introduces these symbolic techniques for the purpose of exploring information flows; the subsequent chapter on [symbolic fuzzing](SymbolicFuzzer.ipynb) then shows how to make full-fledged use of symbolic execution for covering code. Similarly, [search-based fuzzing](SearchBasedFuzzer.ipynb) can often provide a cheaper exploration strategy.
Background
Taint analysis on Python using a library approach, as we implemented in this chapter, was discussed by Conti et al. \cite{Conti2010}.
Exercises
Exercise 1: Tainted Numbers
Introduce a class `tint` (for tainted integer) that, like `tstr`, has a taint attribute that gets passed on from `tint` to `tint`.
Part 1: Creation
Implement the `tint` class such that taints are set:
```python
x = tint(42, taint='SECRET')
assert x.taint == 'SECRET'
```
**Solution.** This is pretty straightforward, as we can apply the same scheme as for `tstr`:
###Code
class tint(int):
def __new__(cls, value, *args, **kw):
return int.__new__(cls, value)
def __init__(self, value, taint=None, **kwargs):
self.taint = taint
x = tint(42, taint='SECRET')
assert x.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 2: Arithmetic Expressions
Ensure that taints get passed along arithmetic expressions; support addition, subtraction, multiplication, and division operators.
```python
y = x + 1
assert y.taint == 'SECRET'
```
**Solution.** As with `tstr`, we implement a `create()` method and a convenience function to quickly define all arithmetic operations:
###Code
class tint(tint):
def create(self, n):
# print("New tint from", n)
return tint(n, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_int_wrapper()` function creates a wrapper around an existing `int` method which attaches the taint to the result of the method:
###Code
def make_int_wrapper(fun):
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
# print(fun, args, kwargs, "=", repr(res))
return self.create(res)
return proxy
###Output
_____no_output_____
###Markdown
We do this for all arithmetic operators:
###Code
for name in ['__add__', '__radd__', '__mul__', '__rmul__', '__sub__',
'__floordiv__', '__truediv__']:
fun = getattr(int, name)
setattr(tint, name, make_int_wrapper(fun))
x = tint(42, taint='SECRET')
y = x + 1
y.taint # type: ignore
###Output
_____no_output_____
###Markdown
Part 3: Passing Taints from Integers to Strings
Converting a tainted integer into a string (using `repr()`) should yield a tainted string:
```python
x_s = repr(x)
assert x_s.taint == 'SECRET'
```
**Solution.** We define the string conversion functions such that they return a tainted string (`tstr`):
###Code
class tint(tint):
def __repr__(self) -> tstr:
s = int.__repr__(self)
return tstr(s, taint=self.taint)
class tint(tint):
def __str__(self) -> tstr:
return tstr(int.__str__(self), taint=self.taint)
x = tint(42, taint='SECRET')
x_s = repr(x)
assert isinstance(x_s, tstr)
assert x_s.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 4: Passing Taints from Strings to Integers
Converting a tainted object (with a `taint` attribute) to an integer should pass that taint:
```python
password = tstr('1234', taint='NOT_EXACTLY_SECRET')
x = tint(password)
assert x == 1234
assert x.taint == 'NOT_EXACTLY_SECRET'
```
**Solution.** This can be done by having the `__init__()` constructor check for a `taint` attribute:
###Code
class tint(tint):
def __init__(self, value, taint=None, **kwargs):
if taint is not None:
self.taint = taint
else:
self.taint = getattr(value, 'taint', None)
password = tstr('1234', taint='NOT_EXACTLY_SECRET')
x = tint(password)
assert x == 1234
assert x.taint == 'NOT_EXACTLY_SECRET'
###Output
_____no_output_____
###Markdown
Tracking Information Flow
We have explored how one could generate better inputs that can penetrate deeper into the program in question. While doing so, we have relied on program crashes to tell us that we have succeeded in finding problems in the program. However, that is rather simplistic. What if the behavior of the program is simply incorrect, but does not lead to a crash? Can one do better?
In this chapter, we explore in depth how to track information flows in Python, and how these flows can be used to determine whether a program behaved as expected.
###Code
from bookutils import YouTubeVideo
YouTubeVideo('MJ0VGzVbhYc')
###Output
_____no_output_____
###Markdown
**Prerequisites*** You should have read the [chapter on coverage](Coverage.ipynb).* You should have read the [chapter on probabilistic fuzzing](ProbabilisticGrammarFuzzer.ipynb). We first set up our infrastructure so that we can make use of previously defined functions.
###Code
import bookutils
from typing import List, Any, Optional, Union
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.InformationFlow import ```and then make use of the following features.This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string. Tracking String Taints`tstr` objects are replacements for Python strings that allows to track and check _taints_ – that is, information on from where a string originated. For instance, one can mark strings that originate from third party input with a taint of "LOW", meaning that they have a low security level. The taint is passed in the constructor of a `tstr` object:```python>>> thello = tstr('hello', taint='LOW')```A `tstr` object is fully compatible with original Python strings. For instance, we can index it and access substrings:```python>>> thello[:4]'hell'```However, the `tstr` object also stores the taint, which can be accessed using the `taint` attribute:```python>>> thello.taint'LOW'```The neat thing about taints is that they propagate to all strings derived from the original tainted string.Indeed, any operation from a `tstr` string that results in a string fragment produces another `tstr` object that includes the original taint. For example:```python>>> thello[1:2].taint type: ignore'LOW'````tstr` objects duplicate most `str` methods, as indicated in the class diagram: Tracking Character Origins`ostr` objects extend `tstr` objects by not only tracking a taint, but also the originating _indexes_ from the input string, This allows you to exactly track where individual characters came from. Assume you have a long string, which at index 100 contains the password `"joshua1234"`. Then you can save this origin information using an `ostr` as follows:```python>>> secret = ostr("joshua1234", origin=100, taint='SECRET')```The `origin` attribute of an `ostr` provides access to a list of indexes:```python>>> secret.origin[100, 101, 102, 103, 104, 105, 106, 107, 108, 109]>>> secret.taint'SECRET'````ostr` objects are compatible with Python strings, except that string operations return `ostr` objects (together with the saved origin an index information). An index of `-1` indicates that the corresponding character has no origin as supplied to the `ostr()` constructor:```python>>> secret_substr = (secret[0:4] + "-" + secret[6:])>>> secret_substr.taint'SECRET'>>> secret_substr.origin[100, 101, 102, 103, -1, 106, 107, 108, 109]````ostr` objects duplicate most `str` methods, as indicated in the class diagram: A Vulnerable DatabaseSay we want to implement an *in-memory database* service in Python. Here is a rather flimsy attempt. We use the following dataset.
###Code
INVENTORY = """\
1997,van,Ford,E350
2000,car,Mercury,Cougar
1999,car,Chevy,Venture\
"""
VEHICLES = INVENTORY.split('\n')
###Output
_____no_output_____
###Markdown
Our DB is a Python class that parses its arguments and throws an `SQLException`, which is defined below.
###Code
class SQLException(Exception):
pass
###Output
_____no_output_____
###Markdown
The database is simply a Python `dict` that is exposed only through SQL queries.
###Code
class DB:
def __init__(self, db={}):
self.db = dict(db)
###Output
_____no_output_____
###Markdown
Representing Tables
The database contains tables, which are created by a call to the `create_table()` method. Each table data structure is a pair of values: the first is the metadata containing column names and types; the second is the list of rows in the table.
###Code
class DB(DB):
def create_table(self, table, defs):
self.db[table] = (defs, [])
###Output
_____no_output_____
###Markdown
A table can be retrieved by name using the `table()` method.
###Code
class DB(DB):
def table(self, t_name):
if t_name in self.db:
return self.db[t_name]
raise SQLException('Table (%s) was not found' % repr(t_name))
###Output
_____no_output_____
###Markdown
Here is an example of how to use both. We fill a table `inventory` with four columns: `year`, `kind`, `company`, and `model`. Initially, our table is empty.
###Code
def sample_db():
db = DB()
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
return db
###Output
_____no_output_____
###Markdown
Using `table()`, we can retrieve the table definition as well as its contents.
###Code
db = sample_db()
db.table('inventory')
###Output
_____no_output_____
###Markdown
We also define `column()` for retrieving the column definition from a table declaration.
###Code
class DB(DB):
def column(self, table_decl, c_name):
if c_name in table_decl:
return table_decl[c_name]
raise SQLException('Column (%s) was not found' % repr(c_name))
db = sample_db()
decl, rows = db.table('inventory')
db.column(decl, 'year')
###Output
_____no_output_____
###Markdown
Executing SQL Statements
The `sql()` method of `DB` executes SQL statements. It inspects its arguments and dispatches the query based on the kind of SQL statement to be executed.
###Code
class DB(DB):
def do_select(self, query):
...
def do_update(self, query):
...
def do_insert(self, query):
...
def do_delete(self, query):
...
def sql(self, query):
methods = [('select ', self.do_select),
('update ', self.do_update),
('insert into ', self.do_insert),
('delete from', self.do_delete)]
for key, method in methods:
if query.startswith(key):
return method(query[len(key):])
raise SQLException('Unknown SQL (%s)' % query)
###Output
_____no_output_____
###Markdown
Here's an example of how to use the `DB` class:
###Code
some_db = DB()
some_db.sql('select year from inventory')
###Output
_____no_output_____
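###Markdown
Conversely – a quick sketch – anything that does not start with one of the four known keywords is rejected:
###Code
try:
    some_db.sql('drop table inventory')
    assert False, "unknown statements should be rejected"
except SQLException:
    pass  # only select / update / insert / delete are dispatched
###Output
_____no_output_____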
###Markdown
However, at this point, the individual methods for handling SQL statements are not yet defined. Let us do this in the next steps.
Excursion: Implementing SQL Statements
Selecting Data
The `do_select()` method handles SQL `select` statements to retrieve data from a table.
###Code
class DB(DB):
def do_select(self, query):
FROM, WHERE = ' from ', ' where '
table_start = query.find(FROM)
if table_start < 0:
raise SQLException('no table specified')
where_start = query.find(WHERE)
select = query[:table_start]
if where_start >= 0:
t_name = query[table_start + len(FROM):where_start]
where = query[where_start + len(WHERE):]
else:
t_name = query[table_start + len(FROM):]
where = ''
_, table = self.table(t_name)
if where:
selected = self.expression_clause(table, "(%s)" % where)
selected_rows = [hm for i, data, hm in selected if data]
else:
selected_rows = table
rows = self.expression_clause(selected_rows, "(%s)" % select)
return [data for i, data, hm in rows]
###Output
_____no_output_____
###Markdown
The `expression_clause()` method is used for two purposes:
1. In the form `select` $x$, $y$, $z$ `from` $t$, it _evaluates_ (and returns) the expressions $x$, $y$, $z$ in the contexts of the selected rows.
2. If a clause `where` $p$ is given, it also evaluates $p$ in the context of the rows and includes the rows in the selection only if $p$ holds.
To evaluate expressions like $x$, $y$, $z$ or $p$, we make use of the Python `eval()` evaluation function.
###Code
class DB(DB):
def expression_clause(self, table, statement):
selected = []
for i, hm in enumerate(table):
selected.append((i, self.my_eval(statement, {}, hm), hm))
return selected
###Output
_____no_output_____
###Markdown
Internally, `expression_clause()` calls `my_eval()` to evaluate any given statement.
###Code
class DB(DB):
def my_eval(self, statement, g, l):
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid WHERE (%s)' % repr(statement))
###Output
_____no_output_____
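###Markdown
As a quick check (an illustrative cell, not part of the original chapter), `my_eval()` evaluates a Python expression in the context of a row, just as a `where` clause will be evaluated later:
###Code
db = sample_db()
db.my_eval('year == 1997', {}, {'year': 1997})
###Output
_____no_output_____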
###Markdown
**Note:** Using `eval()` here introduces some important security issues, which we will discuss later in this chapter. Here's how we can use `sql()` to issue a query. Note that the table is still empty.
###Code
db = sample_db()
db.sql('select year from inventory')
db = sample_db()
db.sql('select year from inventory where year == 2018')
###Output
_____no_output_____
###Markdown
Inserting DataThe `do_insert()` method handles SQL `insert` statements.
###Code
class DB(DB):
def do_insert(self, query):
VALUES = ' values '
table_end = query.find('(')
t_name = query[:table_end].strip()
names_end = query.find(')')
decls, table = self.table(t_name)
names = [i.strip() for i in query[table_end + 1:names_end].split(',')]
# verify columns exist
for k in names:
self.column(decls, k)
values_start = query.find(VALUES)
if values_start < 0:
raise SQLException('Invalid INSERT (%s)' % repr(query))
values = [
i.strip() for i in query[values_start + len(VALUES) + 1:-1].split(',')
]
if len(names) != len(values):
raise SQLException(
'names(%s) != values(%s)' % (repr(names), repr(values)))
        # A plain dict lookup happens in C code, so we search for the key explicitly instead
kvs = {}
for k,v in zip(names, values):
for key,kval in decls.items():
if k == key:
kvs[key] = self.convert(kval, v)
table.append(kvs)
###Output
_____no_output_____
###Markdown
In SQL, a column can hold values of any supported data type. To ensure that a value is stored using the type originally declared for its column, we need to convert values to specific types; this is what `convert()` provides.
###Code
import ast
class DB(DB):
def convert(self, cast, value):
try:
return cast(ast.literal_eval(value))
except:
raise SQLException('Invalid Conversion %s(%s)' % (cast, value))
###Output
_____no_output_____
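###Markdown
As another quick check (illustrative, not from the original chapter), `convert()` casts SQL literals into values of the declared column types:
###Code
db = sample_db()
db.convert(int, '1997'), db.convert(str, '"van"')
###Output
_____no_output_____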
###Markdown
Here is an example of how to use the SQL `insert` command:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.table('inventory')
###Output
_____no_output_____
###Markdown
With the database filled, we can also run more complex queries:
###Code
db.sql('select year + 1, kind from inventory')
db.sql('select year, kind from inventory where year == 1997')
###Output
_____no_output_____
###Markdown
Updating DataSimilarly, `do_update()` handles SQL `update` statements.
###Code
class DB(DB):
def do_update(self, query):
SET, WHERE = ' set ', ' where '
table_end = query.find(SET)
if table_end < 0:
raise SQLException('Invalid UPDATE (%s)' % repr(query))
set_end = table_end + 5
t_name = query[:table_end]
decls, table = self.table(t_name)
names_end = query.find(WHERE)
if names_end >= 0:
names = query[set_end:names_end]
where = query[names_end + len(WHERE):]
else:
names = query[set_end:]
where = ''
sets = [[i.strip() for i in name.split('=')]
for name in names.split(',')]
# verify columns exist
for k, v in sets:
self.column(decls, k)
if where:
selected = self.expression_clause(table, "(%s)" % where)
updated = [hm for i, d, hm in selected if d]
else:
updated = table
for hm in updated:
for k, v in sets:
# we can not do dict lookups because it is implemented in C.
for key, kval in decls.items():
if key == k:
hm[key] = self.convert(kval, v)
return "%d records were updated" % len(updated)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can update things:
###Code
db.sql('update inventory set year = 1998 where year == 1997')
db.sql('select year from inventory')
db.table('inventory')
###Output
_____no_output_____
###Markdown
Deleting DataFinally, SQL `delete` statements are handled by `do_delete()`.
###Code
class DB(DB):
def do_delete(self, query):
WHERE = ' where '
table_end = query.find(WHERE)
if table_end < 0:
raise SQLException('Invalid DELETE (%s)' % query)
t_name = query[:table_end].strip()
_, table = self.table(t_name)
where = query[table_end + len(WHERE):]
selected = self.expression_clause(table, "%s" % where)
deleted = [i for i, d, hm in selected if d]
for i in sorted(deleted, reverse=True):
del table[i]
return "%d records were deleted" % len(deleted)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can delete data:
###Code
db.sql('delete from inventory where company == "Ford"')
###Output
_____no_output_____
###Markdown
Our database is now empty:
###Code
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
End of Excursion Here is how our database can be used.
###Code
db = DB()
###Output
_____no_output_____
###Markdown
We first create a table in our database with the correct data types.
###Code
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
###Output
_____no_output_____
###Markdown
Here is a simple convenience function to update the table using our dataset.
###Code
def update_inventory(sqldb, vehicle):
inventory_def = sqldb.db['inventory'][0]
k, v = zip(*inventory_def.items())
val = [repr(cast(val)) for cast, val in zip(v, vehicle.split(','))]
sqldb.sql('insert into inventory (%s) values (%s)' % (','.join(k),
','.join(val)))
for V in VEHICLES:
update_inventory(db, V)
###Output
_____no_output_____
###Markdown
Our database now contains the same dataset as `VEHICLES`, stored in the `inventory` table.
###Code
db.db
###Output
_____no_output_____
###Markdown
Here is a sample select statement.
###Code
db.sql('select year,kind from inventory')
db.sql("select company,model from inventory where kind == 'car'")
###Output
_____no_output_____
###Markdown
We can run updates on it.
###Code
db.sql("update inventory set year = 1998, company = 'Suzuki' where kind == 'van'")
db.db
###Output
_____no_output_____
###Markdown
It can even do mathematics on the fly!
###Code
db.sql('select int(year)+10 from inventory')
###Output
_____no_output_____
###Markdown
Adding a new row to our table.
###Code
db.sql("insert into inventory (year, kind, company, model) values (1, 'charriot', 'Rome', 'Quadriga')")
db.db
###Output
_____no_output_____
###Markdown
Which we then delete.
###Code
db.sql("delete from inventory where year < 1900")
###Output
_____no_output_____
###Markdown
Fuzzing SQLTo verify that everything is OK, let us fuzz. First we define our grammar. Excursion: Defining a SQL grammar
###Code
import string
from Grammars import START_SYMBOL, Grammar, Expansion, \
is_valid_grammar, extend_grammar
EXPR_GRAMMAR: Grammar = {
"<start>": ["<expr>"],
"<expr>": ["<bexpr>", "<aexpr>", "(<expr>)", "<term>"],
"<bexpr>": [
"<aexpr><lt><aexpr>",
"<aexpr><gt><aexpr>",
"<expr>==<expr>",
"<expr>!=<expr>",
],
"<aexpr>": [
"<aexpr>+<aexpr>", "<aexpr>-<aexpr>", "<aexpr>*<aexpr>",
"<aexpr>/<aexpr>", "<word>(<exprs>)", "<expr>"
],
"<exprs>": ["<expr>,<exprs>", "<expr>"],
"<lt>": ["<"],
"<gt>": [">"],
"<term>": ["<number>", "<word>"],
"<number>": ["<integer>.<integer>", "<integer>", "-<number>"],
"<integer>": ["<digit><integer>", "<digit>"],
"<word>": ["<word><letter>", "<word><digit>", "<letter>"],
"<digit>":
list(string.digits),
"<letter>":
list(string.ascii_letters + '_:.')
}
assert is_valid_grammar(EXPR_GRAMMAR)
PRINTABLE_CHARS: List[str] = [i for i in string.printable
if i not in "<>'\"\t\n\r\x0b\x0c\x00"] + ['<lt>', '<gt>']
INVENTORY_GRAMMAR = extend_grammar(EXPR_GRAMMAR,
{
'<start>': ['<query>'],
'<query>': [
'select <exprs> from <table>',
'select <exprs> from <table> where <bexpr>',
'insert into <table> (<names>) values (<literals>)',
'update <table> set <assignments> where <bexpr>',
'delete from <table> where <bexpr>',
],
'<table>': ['<word>'],
'<names>': ['<column>,<names>', '<column>'],
'<column>': ['<word>'],
'<literals>': ['<literal>', '<literal>,<literals>'],
'<literal>': ['<number>', "'<chars>'"],
'<assignments>': ['<kvp>,<assignments>', '<kvp>'],
'<kvp>': ['<column>=<value>'],
'<value>': ['<word>'],
'<chars>': ['<char>', '<char><chars>'],
'<char>': PRINTABLE_CHARS, # type: ignore
})
assert is_valid_grammar(INVENTORY_GRAMMAR)
###Output
_____no_output_____
###Markdown
As can be seen from the source of our database, the functions always check whether the table name is correct. Hence, we modify the grammar to choose our particular table so that it will have a better chance of reaching deeper. We will see in the later sections how this can be done automatically.
###Code
INVENTORY_GRAMMAR_F = extend_grammar(INVENTORY_GRAMMAR,
{'<table>': ['inventory']})
###Output
_____no_output_____
###Markdown
End of Excursion
###Code
from GrammarFuzzer import GrammarFuzzer
gf = GrammarFuzzer(INVENTORY_GRAMMAR_F)
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = db.sql(query)
print(repr(res))
except SQLException as e:
print("> ", e)
pass
except:
traceback.print_exc()
break
print()
###Output
'select O6fo,-977091.1,-36.46 from inventory'
> Invalid WHERE ('(O6fo,-977091.1,-36.46)')
'select g3 from inventory where -3.0!=V/g/b+Q*M*G'
> Invalid WHERE ('(-3.0!=V/g/b+Q*M*G)')
'update inventory set z=a,x=F_,Q=K where p(M)<_*S'
> Column ('z') was not found
'update inventory set R=L5pk where e*l*y-u>K+U(:)'
> Column ('R') was not found
'select _/d*Q+H/d(k)<t+M-A+P from inventory'
> Invalid WHERE ('(_/d*Q+H/d(k)<t+M-A+P)')
'select F5 from inventory'
> Invalid WHERE ('(F5)')
'update inventory set jWh.=a6 where wcY(M)>IB7(i)'
> Column ('jWh.') was not found
'update inventory set U=y where L(W<c,(U!=W))<V(((q)==m<F),O,l)'
> Column ('U') was not found
'delete from inventory where M/b-O*h*E<H-W>e(Y)-P'
> Invalid WHERE ('M/b-O*h*E<H-W>e(Y)-P')
'select ((kP(86)+b*S+J/Z/U+i(U))) from inventory'
> Invalid WHERE ('(((kP(86)+b*S+J/Z/U+i(U))))')
###Markdown
Fuzzing does not seem to have triggered any crashes. However, are crashes the only errors that we should be worried about? The Evil of EvalIn our database implementation (notably in the `expression_clause()` method), we have made use of `eval()` to evaluate expressions using the Python interpreter. This allows us to unleash the full power of Python expressions within our SQL statements.
###Code
db.sql('select year from inventory where year < 2000')
###Output
_____no_output_____
###Markdown
In the above query, the clause `year < 2000` is evaluated by `expression_clause()` using Python in the context of each row; hence, `year < 2000` evaluates to either `True` or `False`. The same holds for the expressions being `select`ed:
###Code
db.sql('select year - 1900 if year < 2000 else year - 2000 from inventory')
###Output
_____no_output_____
###Markdown
This works because `year - 1900 if year < 2000 else year - 2000` is a valid Python expression. (It is not a valid SQL expression, though.) The problem with the above is that there is _no limitation_ to what the Python expression can do. What if the user tries the following?
###Code
db.sql('select __import__("os").popen("pwd").read() from inventory')
###Output
_____no_output_____
###Markdown
The above statement effectively reads from the file system of the machine running the database. Instead of `os.popen("pwd").read()`, it could execute arbitrary Python commands – to access data, install software, run a background process. This is where "the full power of Python expressions" turns back on us. What we want is to allow our _program_ to make full use of its power; yet, the _user_ (or any third party) should not be entrusted to do the same. Hence, we need to differentiate between (trusted) _input from the program_ and (untrusted) _input from the user_. One method that allows such differentiation is that of *dynamic taint analysis*. The idea is to identify the functions that accept user input as *sources* that *taint* any string that comes in through them, and those functions that perform dangerous operations as *sinks*. Finally, we bless certain functions as *taint sanitizers*. An input from a source should never reach a sink without undergoing sanitization first. This allows us to use a stronger oracle than simply checking for crashes. Tracking String TaintsThere are various levels of taint tracking that one can perform. The simplest is to track that a string fragment originated in a specific environment, and has not undergone a taint removal process. For this, we simply wrap the original string, together with an environment identifier (the _taint_), in a `tstr`, and produce `tstr` instances on each operation that results in another string fragment. The attribute `taint` holds a label identifying the environment this instance was derived from. A Class for Tainted StringsFor capturing information flows we need a new string class. The idea is to use the new tainted string class `tstr` as a wrapper on the original `str` class. However, `str` is an *immutable* class, and its `__new__()` method accepts only the string value; extra constructor arguments such as a taint would cause an error. To accept such extra arguments, we [hook into `__new__()`](https://docs.python.org/3/reference/datamodel.html#basic-customization), pass only the value on to `str.__new__()`, and return an instance of our own class. We combine this with our initialization code in `__init__()`.
###Code
class tstr(str):
"""Wrapper for strings, saving taint information"""
def __new__(cls, value, *args, **kw):
"""Create a tstr() instance. Used internally."""
return str.__new__(cls, value)
def __init__(self, value: Any, taint: Any = None, **kwargs) -> None:
"""Constructor.
`value` is the string value the `tstr` object is to be constructed from.
`taint` is an (optional) taint to be propagated to derived strings."""
self.taint: Any = taint
class tstr(tstr):
def __repr__(self) -> tstr:
"""Return a representation."""
return tstr(str.__repr__(self), taint=self.taint)
class tstr(tstr):
def __str__(self) -> str:
"""Convert to string"""
return str.__str__(self)
###Output
_____no_output_____
###Markdown
For example, if we wrap `"hello"` in `tstr`, then we should be able to access its taint:
###Code
thello: tstr = tstr('hello', taint='LOW')
thello.taint
repr(thello).taint # type: ignore
###Output
_____no_output_____
###Markdown
Once a string is wrapped with a taint, the taint is propagated to any derived string. Hence, we also need a way to clear the taint. One way is to simply return a `str` instance as above. However, one may sometimes wish to remove the taint from an existing instance. This is accomplished with `clear_taint()`, which simply sets the taint to `None`. It comes with a companion method `has_taint()`, which checks whether a `tstr` instance currently carries a taint.
###Code
class tstr(tstr):
def clear_taint(self):
"""Remove taint"""
self.taint = None
return self
def has_taint(self):
"""Check if taint is present"""
return self.taint is not None
###Output
_____no_output_____
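###Markdown
Here is a quick demonstration (an illustrative cell): a freshly tainted string reports its taint until we clear it:
###Code
tsecret = tstr('secret', taint='SECRET')
assert tsecret.has_taint()
tsecret.clear_taint()
assert not tsecret.has_taint()
###Output
_____no_output_____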
###Markdown
String OperatorsTo propagate the taint, we have to extend string functions, such as operators. We can do so in one single big step, overloading all string methods and operators. When we create a new string from an existing tainted string, we propagate its taint.
###Code
class tstr(tstr):
def create(self, s):
return tstr(s, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_str_wrapper()` function creates a wrapper around an existing string method which attaches the taint to the result of the method:
###Code
class tstr(tstr):
@staticmethod
def make_str_wrapper(fun):
"""Make `fun` (a `str` method) a method in `tstr`"""
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
return self.create(res)
if hasattr(fun, '__doc__'):
# Copy docstring
proxy.__doc__ = fun.__doc__
return proxy
###Output
_____no_output_____
###Markdown
We do this for all string methods that return a string:
###Code
def informationflow_init_1():
for name in ['__format__', '__mod__', '__rmod__', '__getitem__',
'__add__', '__mul__', '__rmul__',
'capitalize', 'casefold', 'center', 'encode',
'expandtabs', 'format', 'format_map', 'join',
'ljust', 'lower', 'lstrip', 'replace',
'rjust', 'rstrip', 'strip', 'swapcase', 'title', 'translate', 'upper']:
fun = getattr(str, name)
setattr(tstr, name, tstr.make_str_wrapper(fun))
informationflow_init_1()
INITIALIZER_LIST = [informationflow_init_1]
def initialize():
for fn in INITIALIZER_LIST:
fn()
###Output
_____no_output_____
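###Markdown
The wrapped methods now propagate taints as well. For instance (an illustrative check), `upper()` and `replace()` keep the taint of the string they were derived from:
###Code
tsecret = tstr('secret', taint='SECRET')
tsecret.upper().taint, tsecret.replace('s', '$').taint
###Output
_____no_output_____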
###Markdown
The one missing operator is `+` with a regular string on the left side and a tainted string on the right side. Python supports a `__radd__()` method which is invoked if the associated object is used on the right side of an addition.
###Code
class tstr(tstr):
def __radd__(self, value):
"""Return value + self, as a `tstr` object"""
return self.create(value + str(self))
###Output
_____no_output_____
###Markdown
With this, we are already done. Let us create a string `thello` with a taint `LOW`.
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
Now, any substring will also be tainted:
###Code
thello[0].taint # type: ignore
thello[1:3].taint # type: ignore
###Output
_____no_output_____
###Markdown
String additions will return a `tstr` object with the taint:
###Code
(tstr('foo', taint='HIGH') + 'bar').taint # type: ignore
###Output
_____no_output_____
###Markdown
Our `__radd__()` method ensures this also works if the `tstr` occurs on the right side of a string addition:
###Code
('foo' + tstr('bar', taint='HIGH')).taint # type: ignore
thello += ', world' # type: ignore
thello.taint # type: ignore
###Output
_____no_output_____
###Markdown
Other operators such as multiplication also work:
###Code
(thello * 5).taint # type: ignore
('hw %s' % thello).taint # type: ignore
(tstr('hello %s', taint='HIGH') % 'world').taint # type: ignore
###Output
_____no_output_____
###Markdown
Tracking Untrusted InputSo, what can one do with tainted strings? We reconsider the `DB` example. We define a "better" `TrustedDB` which only accepts strings tainted as `"TRUSTED"`.
###Code
class TrustedDB(DB):
def sql(self, s):
assert isinstance(s, tstr), "Need a tainted string"
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
return super().sql(s)
###Output
_____no_output_____
###Markdown
Feeding a string with an "unknown" (i.e., non-existing) trust level will cause `TrustedDB` to fail:
###Code
bdb = TrustedDB(db.db)
from ExpectError import ExpectError
with ExpectError():
bdb.sql("select year from INVENTORY")
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_10433/3935989889.py", line 2, in <module>
bdb.sql("select year from INVENTORY")
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_10433/995123203.py", line 3, in sql
assert isinstance(s, tstr), "Need a tainted string"
AssertionError: Need a tainted string (expected)
###Markdown
Additionally, any user input would originally be tagged with `"UNTRUSTED"` as its taint. If we place an untrusted string into our better database, it will also fail:
###Code
bad_user_input = tstr('__import__("os").popen("ls").read()', taint='UNTRUSTED')
with ExpectError():
bdb.sql(bad_user_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_10433/3307042773.py", line 3, in <module>
bdb.sql(bad_user_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_10433/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
Hence, somewhere along the computation, we have to turn the "untrusted" inputs into "trusted" strings. This process is called *sanitization*. A simple sanitization function for our purposes could ensure that the input consists only of a small set of allowed characters (notably excluding quotes and periods); if this is the case, then the input gets a new `"TRUSTED"` taint. If not, we turn the string into an (untrusted) empty string; other alternatives would be to raise an error or to escape or delete "untrusted" characters.
###Code
import re
def sanitize(user_input):
assert isinstance(user_input, tstr)
if re.match(
r'^select +[-a-zA-Z0-9_, ()]+ from +[-a-zA-Z0-9_, ()]+$', user_input):
return tstr(user_input, taint='TRUSTED')
else:
return tstr('', taint='UNTRUSTED')
good_user_input = tstr("select year,model from inventory", taint='UNTRUSTED')
sanitized_input = sanitize(good_user_input)
sanitized_input
sanitized_input.taint
bdb.sql(sanitized_input)
###Output
_____no_output_____
###Markdown
Let us now try out our untrusted input:
###Code
sanitized_input = sanitize(bad_user_input)
sanitized_input
sanitized_input.taint
with ExpectError():
bdb.sql(sanitized_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_10433/249000876.py", line 2, in <module>
bdb.sql(sanitized_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_10433/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
In a similar fashion, we can prevent SQL and code injections discussed in [the chapter on Web fuzzing](WebFuzzer.ipynb). Taint Aware FuzzingWe can also use tainting to _direct fuzzing to those grammar rules that are likely to generate dangerous inputs._ The idea here is to identify inputs generated by our fuzzer that lead to untrusted execution. First, we define the exception to be raised when a tainted value reaches a dangerous operation.
###Code
class Tainted(Exception):
def __init__(self, v):
self.v = v
def __str__(self):
return 'Tainted[%s]' % self.v
###Output
_____no_output_____
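###Markdown
The exception simply carries and renders the offending value (an illustrative check):
###Code
print(Tainted('__import__("os").popen("pwd").read()'))
###Output
_____no_output_____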
###Markdown
TaintedDBNext, since `my_eval()` is the most dangerous operation in the `DB` class, we define a new class `TaintedDB` that overrides `my_eval()` to raise an exception whenever an untrusted string reaches it.
###Code
class TaintedDB(DB):
def my_eval(self, statement, g, l):
if statement.taint != 'TRUSTED':
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
We initialize an instance of `TaintedDB`
###Code
tdb = TaintedDB()
tdb.db = db.db
###Output
_____no_output_____
###Markdown
Then we start fuzzing.
###Code
import traceback
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = tdb.sql(tstr(query, taint='UNTRUSTED'))
print(repr(res))
except SQLException as e:
pass
except Tainted as e:
print("> ", e)
except:
traceback.print_exc()
break
print()
###Output
'delete from inventory where y/u-l+f/y<Y(c)/A-H*q'
> Tainted[y/u-l+f/y<Y(c)/A-H*q]
"insert into inventory (G,Wmp,sl3hku3) values ('<','?')"
"insert into inventory (d0) values (',_G')"
'select P*Q-w/x from inventory where X<j==:==j*r-f'
> Tainted[(X<j==:==j*r-f)]
'select a>F*i from inventory where Q/I-_+P*j>.'
> Tainted[(Q/I-_+P*j>.)]
'select (V-i<T/g) from inventory where T/r/G<FK(m)/(i)'
> Tainted[(T/r/G<FK(m)/(i))]
'select (((i))),_(S,_)/L-k<H(Sv,R,n,W,Y) from inventory'
> Tainted[((((i))),_(S,_)/L-k<H(Sv,R,n,W,Y))]
'select (N==c*U/P/y),i-e/n*y,T!=w,u from inventory'
> Tainted[((N==c*U/P/y),i-e/n*y,T!=w,u)]
'update inventory set _=B,n=v where o-p*k-J>T'
'select s from inventory where w4g4<.m(_)/_>t'
> Tainted[(w4g4<.m(_)/_>t)]
###Markdown
One can see that `insert`, `update`, `select` and `delete` statements on an existing table lead to taint exceptions. We can now focus on these specific kinds of inputs. However, this is not the only thing we can do. We will see how we can identify specific portions of input that reached tainted execution using character origins in the later sections. But before that, we explore other uses of taints. Preventing Privacy LeaksUsing taints, we can also ensure that secret information does not leak out. We can assign a special taint `"SECRET"` to strings whose information must not leak out:
###Code
secrets = tstr('<Plenty of secret keys>', taint='SECRET')
###Output
_____no_output_____
###Markdown
Accessing any substring of `secrets` will propagate the taint:
###Code
secrets[1:3].taint # type: ignore
###Output
_____no_output_____
###Markdown
Consider the _heartbeat_ security leak from [the chapter on Fuzzing](Fuzzer.ipynb), in which a server would accidentally send back not only the user input sent to it, but also secret memory contents. If the reply consists only of the user input, there is no taint associated with it:
###Code
user_input = "hello"
reply = user_input
isinstance(reply, tstr)
###Output
_____no_output_____
###Markdown
If, however, the reply contains _any_ part of the secret, the reply will be tainted:
###Code
reply = user_input + secrets[0:5]
reply
reply.taint # type: ignore
###Output
_____no_output_____
###Markdown
The output function of our server would now ensure that the data sent back does not contain any secret information:
###Code
def send_back(s):
assert not isinstance(s, tstr) and not s.taint == 'SECRET' # type: ignore
...
with ExpectError():
send_back(reply)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_10433/3747050841.py", line 2, in <module>
send_back(reply)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_10433/3158733057.py", line 2, in send_back
assert not isinstance(s, tstr) and not s.taint == 'SECRET' # type: ignore
AssertionError (expected)
###Markdown
Our `tstr` solution can help to identify information leaks – but it is by no means complete. If we actually take the `heartbeat()` implementation from [the chapter on Fuzzing](Fuzzer.ipynb), we will see that _any_ reply is marked as `SECRET` – even those not even accessing secret memory:
###Code
from Fuzzer import heartbeat
reply = heartbeat('hello', 5, memory=secrets)
reply.taint # type: ignore
###Output
_____no_output_____
###Markdown
Why is this? If we look into the implementation of `heartbeat()`, we will see that it first builds a long string `memory` from the (non-secret) reply and the (secret) memory, before returning the first characters from `memory`:
```python
# Store reply in memory
memory = reply + memory[len(reply):]
```
At this point, the whole memory still is tainted as `SECRET`, _including_ the non-secret part from `reply`. We may be able to circumvent the issue by tagging the `reply` as `PUBLIC` – but then, this taint would be in conflict with the `SECRET` tag of `memory`. What happens if we compose a string from two differently tainted strings?
###Code
thilo = tstr("High", taint='HIGH') + tstr("Low", taint='LOW')
###Output
_____no_output_____
###Markdown
It turns out that in this case, the `__add__()` method takes precedence over the `__radd__()` method, which means that the right-hand `"Low"` string is treated as a regular (non-tainted) string.
###Code
thilo
thilo.taint # type: ignore
###Output
_____no_output_____
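###Markdown
One could resolve such conflicts with special handling in `__add__()`, as discussed next. Here is a minimal sketch (illustrative only and not part of the `tstr` implementation above; the class name and precedence order are assumptions) in which the higher-priority taint wins:
###Code
# Assumed precedence order for this sketch; higher index = higher priority.
# All taints used here are assumed to appear in this list.
TAINT_PRIORITY = ['LOW', 'HIGH']

class prio_tstr(tstr):
    def __add__(self, other):
        # Collect the taints of both operands (the right one may be untainted)
        other_taint = other.taint if isinstance(other, tstr) else None
        taints = [t for t in (self.taint, other_taint) if t is not None]
        # The taint with the highest priority wins
        winner = max(taints, key=TAINT_PRIORITY.index) if taints else None
        return prio_tstr(str(self) + str(other), taint=winner)

(prio_tstr('High', taint='HIGH') + prio_tstr('Low', taint='LOW')).taint
###Output
_____no_output_____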
###Markdown
We could set up the `__add__()` and other methods with special handling for conflicting taints. However, the way this conflict should be resolved would be highly _application-dependent_:* If we use taints to indicate _privacy levels_, `SECRET` privacy should take precedence over `PUBLIC` privacy. Any combination of a `SECRET`-tainted string and a `PUBLIC`-tainted string thus should have a `SECRET` taint.* If we use taints to indicate _origins_ of information, an `UNTRUSTED` origin should take precedence over a `TRUSTED` origin. Any combination of an `UNTRUSTED`-tainted string and a `TRUSTED`-tainted string thus should have an `UNTRUSTED` taint.Of course, such conflict resolutions can be implemented. But even so, they will not help us differentiate secret from non-secret output data in the `heartbeat()` example. Tracking Individual CharactersFortunately, there is a better, more generic way to solve the above problems. The key to composing differently tainted strings is to assign taints not only to strings, but actually to every bit of information – in our case, characters. If every character has a taint of its own, a new composition of characters will simply inherit this very taint _per character_. To this end, we introduce a second bit of information named _origin_. Distinguishing various untrusted sources can be accomplished by giving each instance its own separate origin (called *colors* in dynamic taint research). You will see an instance of this technique in the chapter on [Grammar Mining](GrammarMiner.ipynb). In this section, we carry *character-level* origins. That is, given a fragment that resulted from a portion of the original origin-tracked string, one will be able to tell which portion of the input string the fragment was taken from. In essence, each input character index from an origin-tracked source gets its own color. More complex origin tracking, such as *bitmap origins*, is also possible, where a single character may result from multiple origined character indexes (such as *checksum* operations on strings). We do not consider these in this chapter. A Class for Tracking Character OriginsLet us introduce a class `ostr` which, like `tstr`, carries a taint for each string, and additionally an _origin_ for each character that indicates its source. Each origin is a consecutive number in a particular range (by default, starting with zero) indicating the character's _position_ within a specific source.
###Code
class ostr(str):
"""Wrapper for strings, saving taint and origin information"""
DEFAULT_ORIGIN = 0
def __new__(cls, value, *args, **kw):
"""Create an ostr() instance. Used internally."""
return str.__new__(cls, value)
def __init__(self, value: Any, taint: Any = None,
origin: Optional[Union[int, List[int]]] = None, **kwargs) -> None:
"""Constructor.
`value` is the string value the `ostr` object is to be constructed from.
`taint` is an (optional) taint to be propagated to derived strings.
`origin` (optional) is either
- an integer denoting the index of the first character in `value`, or
- a list of integers denoting the origins of the characters in `value`,
"""
self.taint = taint
if origin is None:
origin = ostr.DEFAULT_ORIGIN
if isinstance(origin, int):
self.origin = list(range(origin, origin + len(self)))
else:
self.origin = origin
assert len(self.origin) == len(self)
###Output
_____no_output_____
###Markdown
As with `tstr`, above, we implement methods for conversion into (regular) Python strings:
###Code
class ostr(ostr):
def create(self, s):
return ostr(s, taint=self.taint, origin=self.origin)
class ostr(ostr):
UNKNOWN_ORIGIN = -1
def __repr__(self):
# handle escaped chars
origin = [ostr.UNKNOWN_ORIGIN]
for s, o in zip(str(self), self.origin):
origin.extend([o] * (len(repr(s)) - 2))
origin.append(ostr.UNKNOWN_ORIGIN)
return ostr(str.__repr__(self), taint=self.taint, origin=origin)
class ostr(ostr):
def __str__(self):
return str.__str__(self)
###Output
_____no_output_____
###Markdown
By default, character origins start with `0`:
###Code
othello = ostr('hello')
assert othello.origin == [0, 1, 2, 3, 4]
###Output
_____no_output_____
###Markdown
We can also specify the starting origin, as shown below (here, `6..10`):
###Code
tworld = ostr('world', origin=6)
assert tworld.origin == [6, 7, 8, 9, 10]
a = ostr("hello\tworld")
repr(a).origin # type: ignore
###Output
_____no_output_____
###Markdown
`str()` returns a `str` instance without origin or taint information:
###Code
assert type(str(othello)) == str
###Output
_____no_output_____
###Markdown
`repr()`, however, keeps the origin information for the original string:
###Code
repr(othello)
repr(othello).origin # type: ignore
###Output
_____no_output_____
###Markdown
Just as with taints, we can clear origins and check whether an origin is present:
###Code
class ostr(ostr):
def clear_taint(self):
self.taint = None
return self
def has_taint(self):
return self.taint is not None
class ostr(ostr):
def clear_origin(self):
self.origin = [self.UNKNOWN_ORIGIN] * len(self)
return self
def has_origin(self):
return any(origin != self.UNKNOWN_ORIGIN for origin in self.origin)
othello = ostr('Hello')
assert othello.has_origin()
othello.clear_origin()
assert not othello.has_origin()
###Output
_____no_output_____
###Markdown
In the remainder of this section, we re-implement various string methods such that they also keep track of origins. If this is too tedious for you, jump right [to the next section](#Checking-Origins), which gives a number of usage examples. Excursion: Implementing String Methods CreateWe need to create new substrings that are wrapped in `ostr` objects. However, we also want to allow our subclasses to create their own instances. Hence we again provide a `create()` method that produces a new `ostr` instance.
###Code
class ostr(ostr):
def create(self, res, origin=None):
return ostr(res, taint=self.taint, origin=origin)
othello = ostr('hello', taint='HIGH')
otworld = othello.create('world', origin=6)
otworld.origin
otworld.taint
assert (othello.origin, otworld.origin) == (
[0, 1, 2, 3, 4], [6, 7, 8, 9, 10])
###Output
_____no_output_____
###Markdown
IndexIn Python, indexing is provided through `__getitem__()`. Indexing with non-negative integers is simple enough. However, there are two additional wrinkles. The first is that a negative index counts from the end of the string; that is, the last character has index `-1`.
###Code
class ostr(ostr):
def __getitem__(self, key):
res = super().__getitem__(key)
if isinstance(key, int):
key = len(self) + key if key < 0 else key
return self.create(res, [self.origin[key]])
elif isinstance(key, slice):
return self.create(res, self.origin[key])
else:
assert False
ohello = ostr('hello', taint='HIGH')
assert (ohello[0], ohello[-1]) == ('h', 'o')
ohello[0].taint
###Output
_____no_output_____
###Markdown
The other wrinkle is that `__getitem__()` can accept a slice. We discuss this next. SlicesSlicing with `[n:m]` is already handled by `__getitem__()` receiving a `slice` object, as implemented above. To also support iterating over the characters of an `ostr` (for example, in loops and comprehensions), we define the `__iter__()` method, which returns a custom iterator.
###Code
class ostr(ostr):
def __iter__(self):
return ostr_iterator(self)
###Output
_____no_output_____
###Markdown
The `__iter__()` method requires a supporting `iterator` object. The `iterator` is used to save the state of the current iteration, which it does by keeping a reference to the original `ostr`, and the current index of iteration `_str_idx`.
###Code
class ostr_iterator():
def __init__(self, ostr):
self._ostr = ostr
self._str_idx = 0
def __next__(self):
if self._str_idx == len(self._ostr):
raise StopIteration
# calls ostr getitem should be ostr
c = self._ostr[self._str_idx]
assert isinstance(c, ostr)
self._str_idx += 1
return c
###Output
_____no_output_____
###Markdown
Bringing all these together:
###Code
thw = ostr('hello world', taint='HIGH')
thw[0:5]
assert thw[0:5].has_taint()
assert thw[0:5].has_origin()
thw[0:5].taint
thw[0:5].origin
###Output
_____no_output_____
###Markdown
Splits
###Code
def make_split_wrapper(fun):
def proxy(self, *args, **kwargs):
lst = fun(self, *args, **kwargs)
return [self.create(elem) for elem in lst]
return proxy
for name in ['split', 'rsplit', 'splitlines']:
fun = getattr(str, name)
setattr(ostr, name, make_split_wrapper(fun))
othello = ostr('hello world', taint='LOW')
othello == 'hello world'
othello.split()[0].taint # type: ignore
###Output
_____no_output_____
###Markdown
(Exercise for the reader: handle _partitions_, i.e., splitting a string by substrings) ConcatenationIf two origined strings are concatenated together, it may be desirable to transfer the origins from each to the corresponding portion of the resulting string. The concatenation of strings is accomplished by overriding `__add__()`.
###Code
class ostr(ostr):
def __add__(self, other):
if isinstance(other, ostr):
return self.create(str.__add__(self, other),
(self.origin + other.origin))
else:
return self.create(str.__add__(self, other),
(self.origin + [self.UNKNOWN_ORIGIN for i in other]))
###Output
_____no_output_____
###Markdown
Testing concatenations between two `ostr` instances:
###Code
othello = ostr("hello")
otworld = ostr("world", origin=6)
othw = othello + otworld
assert othw.origin == [0, 1, 2, 3, 4, 6, 7, 8, 9, 10] # type: ignore
###Output
_____no_output_____
###Markdown
What if a `ostr` is concatenated with a `str`?
###Code
space = " "
th_w = othello + space + otworld
assert th_w.origin == [
0,
1,
2,
3,
4,
ostr.UNKNOWN_ORIGIN,
ostr.UNKNOWN_ORIGIN,
6,
7,
8,
9,
10]
###Output
_____no_output_____
###Markdown
One wrinkle here is that when adding an `ostr` and a `str`, the user may place the `str` first, in which case the `__add__()` method would be looked up on the `str` instance, not on the `ostr` instance. However, Python provides a solution: if we define `__radd__()` on the `ostr` class, that method will be called rather than `str.__add__()`.
###Code
class ostr(ostr):
def __radd__(self, other):
origin = other.origin if isinstance(other, ostr) else [
self.UNKNOWN_ORIGIN for i in other]
return self.create(str.__add__(other, self), (origin + self.origin))
###Output
_____no_output_____
###Markdown
We test it out:
###Code
shello = "hello"
otworld = ostr("world")
thw = shello + otworld
assert thw.origin == [ostr.UNKNOWN_ORIGIN] * len(shello) + [0, 1, 2, 3, 4] # type: ignore
###Output
_____no_output_____
###Markdown
These two methods, slicing and concatenation, are sufficient to implement other string methods that produce a new string without changing the characters themselves (i.e., no case changes). Hence, we look at a helper method next. Extract Origin StringGiven a specific input index, the method `x()` extracts the corresponding origined portion from an `ostr`. As a convenience, it supports `slice` arguments along with `int`s.
###Code
class ostr(ostr):
class TaintException(Exception):
pass
def x(self, i=0):
"""Extract substring at index/slice `i`"""
if not self.origin:
            raise self.TaintException('Invalid request idx')
if isinstance(i, int):
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j == i]]
elif isinstance(i, slice):
r = range(i.start or 0, i.stop or len(self), i.step or 1)
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j in r]]
thw = ostr('hello world', origin=100)
assert thw.x(101) == ['e']
assert thw.x(slice(101, 105)) == ['e', 'l', 'l', 'o']
###Output
_____no_output_____
###Markdown
Replace The `replace()` method replaces a portion of the string with another.
###Code
class ostr(ostr):
def replace(self, a, b, n=None):
old_origin = self.origin
b_origin = b.origin if isinstance(
b, ostr) else [self.UNKNOWN_ORIGIN] * len(b)
mystr = str(self)
i = 0
while True:
if n and i >= n:
break
idx = mystr.find(a)
if idx == -1:
break
last = idx + len(a)
mystr = mystr.replace(a, b, 1)
partA, partB = old_origin[0:idx], old_origin[last:]
old_origin = partA + b_origin + partB
i += 1
return self.create(mystr, old_origin)
my_str = ostr("aa cde aa")
res = my_str.replace('aa', 'bb')
assert res, res.origin == ('bb', 'cde', 'bb',
[ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN,
2, 3, 4, 5, 6,
ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN])
my_str = ostr("aa cde aa")
res = my_str.replace('aa', ostr('bb', origin=100))
assert (
res, res.origin) == (
('bb cde bb'), [
100, 101, 2, 3, 4, 5, 6, 100, 101])
###Output
_____no_output_____
###Markdown
Split We essentially have to re-implement the split operations; splitting on whitespace (the default) is handled slightly differently from splitting on an explicit separator.
###Code
class ostr(ostr):
def _split_helper(self, sep, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = len(sep)
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
first_idx = last_idx + sep_len
return result_list
def _split_space(self, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = 0
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
v = str(self[last_idx:])
sep_len = len(v) - len(v.lstrip(' '))
first_idx = last_idx + sep_len
return result_list
def rsplit(self, sep=None, maxsplit=-1):
splitted = super().rsplit(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
def split(self, sep=None, maxsplit=-1):
splitted = super().split(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
my_str = ostr('ab cdef ghij kl')
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([0, 1], [3, 4, 5, 6], [8, 9, 10, 11], [13, 14])
my_str = ostr('ab cdef ghij kl', origin=list(range(0, 15)))
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert(ab.origin, cdef.origin, kl.origin) == ([0, 1], [3, 4, 5, 6], [13, 14])
my_str = ostr('ab cdef ghij kl', origin=100, taint='HIGH')
ab, cdef, ghij, kl = my_str.rsplit()
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([100, 101], [105, 106, 107, 108], [110, 111, 112, 113],
[118, 119])
my_str = ostr('ab cdef ghij kl', origin=list(range(0, 20)), taint='HIGH')
ab, cdef, ghij, kl = my_str.split()
assert (ab.origin, cdef.origin, kl.origin) == ([0, 1], [5, 6, 7, 8], [18, 19])
assert ab.taint == 'HIGH'
###Output
_____no_output_____
###Markdown
Strip
###Code
class ostr(ostr):
def strip(self, cl=None):
return self.lstrip(cl).rstrip(cl)
def lstrip(self, cl=None):
res = super().lstrip(cl)
i = self.find(res)
return self[i:]
def rstrip(self, cl=None):
res = super().rstrip(cl)
return self[0:len(res)]
my_str1 = ostr(" abc ")
v = my_str1.strip()
assert v, v.origin == ('abc', [2, 3, 4])
my_str1 = ostr(" abc ")
v = my_str1.lstrip()
assert (v, v.origin) == ('abc ', [2, 3, 4, 5, 6])
my_str1 = ostr(" abc ")
v = my_str1.rstrip()
assert (v, v.origin) == (' abc', [0, 1, 2, 3, 4])
###Output
_____no_output_____
###Markdown
Expand Tabs
###Code
class ostr(ostr):
def expandtabs(self, n=8):
parts = self.split('\t')
res = super().expandtabs(n)
all_parts = []
for i, p in enumerate(parts):
all_parts.extend(p.origin)
if i < len(parts) - 1:
l = len(all_parts) % n
all_parts.extend([p.origin[-1]] * l)
return self.create(res, all_parts)
my_s = str("ab\tcd")
my_ostr = ostr("ab\tcd")
v1 = my_s.expandtabs(4)
v2 = my_ostr.expandtabs(4)
assert str(v1) == str(v2)
assert (len(v1), repr(v2), v2.origin) == (6, "'ab cd'", [0, 1, 1, 1, 3, 4])
class ostr(ostr):
def join(self, iterable):
mystr = ''
myorigin = []
sep_origin = self.origin
lst = list(iterable)
for i, s in enumerate(lst):
sorigin = s.origin if isinstance(s, ostr) else [
self.UNKNOWN_ORIGIN] * len(s)
myorigin.extend(sorigin)
mystr += str(s)
if i < len(lst) - 1:
myorigin.extend(sep_origin)
mystr += str(self)
res = super().join(iterable)
assert len(res) == len(mystr)
return self.create(res, myorigin)
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104]) # type: ignore
v4 = ostr('').join([v2, v3, v1])
assert (
v4, v4.origin) == (
'cdefab', [
103, 104, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 100, 101])
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104]) # type: ignore
v4 = ostr(',').join([v2, v3, v1])
assert (v4, v4.origin) == ('cd,ef,ab',
[103, 104, 0, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 0, 100, 101]) # type: ignore
###Output
_____no_output_____
###Markdown
Partitions
###Code
class ostr(ostr):
def partition(self, sep):
partA, sep, partB = super().partition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
def rpartition(self, sep):
partA, sep, partB = super().rpartition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
###Output
_____no_output_____
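###Markdown
A quick check (an illustrative cell) that partitioning preserves the character origins of each part:
###Code
my_str = ostr('hello:world', origin=100)
head, sep, tail = my_str.partition(':')
assert (head.origin, sep.origin, tail.origin) == (
    [100, 101, 102, 103, 104], [105], [106, 107, 108, 109, 110])
###Output
_____no_output_____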
###Markdown
Justify
###Code
class ostr(ostr):
    def ljust(self, width, fillchar=' '):
        res = super().ljust(width, fillchar)
        pad = len(res) - len(self)
        # `ljust()` pads on the right, so the padding origins come last
        if isinstance(fillchar, ostr):
            t = fillchar.origin[0]
        else:
            t = self.UNKNOWN_ORIGIN
        return self.create(res, self.origin + [t] * pad)
class ostr(ostr):
    def rjust(self, width, fillchar=' '):
        res = super().rjust(width, fillchar)
        pad = len(res) - len(self)
        # `rjust()` pads on the left, so the padding origins come first
        if isinstance(fillchar, ostr):
            t = fillchar.origin[0]
        else:
            t = self.UNKNOWN_ORIGIN
        return self.create(res, [t] * pad + self.origin)
###Output
_____no_output_____
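###Markdown
A quick check (an illustrative cell) that padding characters receive the `UNKNOWN_ORIGIN` marker, while the original characters keep their origins:
###Code
oab = ostr('ab', origin=100)
v = oab.rjust(4)
assert (str(v), v.origin) == ('  ab', [ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 100, 101])
v = oab.ljust(4)
assert (str(v), v.origin) == ('ab  ', [100, 101, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN])
###Output
_____no_output_____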
###Markdown
mod
###Code
class ostr(ostr):
def __mod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
s_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = self.find('%s')
assert i >= 0
res = super().__mod__(s)
r_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
class ostr(ostr):
def __rmod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
r_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = s.find('%s')
assert i >= 0
res = super().__rmod__(s)
s_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
a = ostr('hello %s world', origin=100)
a
(a % 'good').origin
b = 'hello %s world'
c = ostr('bad', origin=10)
(b % c).origin
###Output
_____no_output_____
###Markdown
String methods that do not change origin
###Code
class ostr(ostr):
def swapcase(self):
return self.create(str(self).swapcase(), self.origin)
def upper(self):
return self.create(str(self).upper(), self.origin)
def lower(self):
return self.create(str(self).lower(), self.origin)
def capitalize(self):
return self.create(str(self).capitalize(), self.origin)
def title(self):
return self.create(str(self).title(), self.origin)
a = ostr('aa', origin=100).upper()
a, a.origin
###Output
_____no_output_____
###Markdown
General wrappers These are not strictly needed for operation, but can be useful for tracing.
###Code
def make_basic_str_wrapper(fun): # type: ignore
def proxy(*args, **kwargs):
res = fun(*args, **kwargs)
return res
return proxy
import inspect
import types
def informationflow_init_2():
ostr_members = [name for name, fn in inspect.getmembers(ostr, callable)
if isinstance(fn, types.FunctionType) and fn.__qualname__.startswith('ostr')]
for name, fn in inspect.getmembers(str, callable):
if name not in set(['__class__', '__new__', '__str__', '__init__',
'__repr__', '__getattribute__']) | set(ostr_members):
setattr(ostr, name, make_basic_str_wrapper(fn))
informationflow_init_2()
INITIALIZER_LIST.append(informationflow_init_2)
###Output
_____no_output_____
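###Markdown
For instance (an illustrative check), methods that do not return strings simply pass their results through:
###Code
ohello = ostr('hello', taint='HIGH')
ohello.find('ll'), ohello.startswith('he')
###Output
_____no_output_____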
###Markdown
Methods yet to be translated These methods generate strings from other strings. However, we do not have origin-preserving implementations for them yet. Hence, calling them raises a `TaintException` until we can provide proper translations.
###Code
def make_str_abort_wrapper(fun):
def proxy(*args, **kwargs):
raise ostr.TaintException(
'%s Not implemented in `ostr`' %
fun.__name__)
return proxy
def informationflow_init_3():
for name, fn in inspect.getmembers(str, callable):
# Omitted 'splitlines' as this is needed for formatting output in
# IPython/Jupyter
if name in ['__format__', 'format_map', 'format',
'__mul__', '__rmul__', 'center', 'zfill', 'decode', 'encode']:
setattr(ostr, name, make_str_abort_wrapper(fn))
informationflow_init_3()
INITIALIZER_LIST.append(informationflow_init_3)
###Output
_____no_output_____
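###Markdown
For example (an illustrative check), invoking one of these methods now raises a `TaintException`:
###Code
with ExpectError():
    ostr('secret').center(10)
###Output
_____no_output_____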
###Markdown
While generating proxy wrappers for string operations handles most common cases of information flow, some operations involving strings cannot be overridden; for example, methods invoked on a plain `str` object (such as `"".join(...)` with a plain string separator) return plain strings without origins. End of Excursion Checking OriginsWith all this implemented, we now have full-fledged `ostr` strings where we can easily check the origin of each and every character. To check whether a string originates from another string, we can convert the origin to a set and resort to standard set operations:
###Code
s = ostr("hello", origin=100)
s[1]
s[1].origin
set(s[1].origin) <= set(s.origin)
t = ostr("world", origin=200)
set(s.origin) <= set(t.origin)
u = s + t + "!"
u.origin
ostr.UNKNOWN_ORIGIN in u.origin
###Output
_____no_output_____
###Markdown
Privacy Leaks RevisitedLet us apply character-level origin tracking to see whether we can come up with a satisfactory solution for checking the `heartbeat()` function against information leakage.
###Code
SECRET_ORIGIN = 1000
###Output
_____no_output_____
###Markdown
We define a "secret" that must not leak out:
###Code
secret = ostr('<again, some super-secret input>', origin=SECRET_ORIGIN)
###Output
_____no_output_____
###Markdown
Each and every character in `secret` has an origin starting with `SECRET_ORIGIN`:
###Code
print(secret.origin)
###Output
[1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031]
###Markdown
If we now invoke `heartbeat()` with a given string, the origin of the reply should all be `UNKNOWN_ORIGIN` (from the input), and none of the characters should have a `SECRET_ORIGIN`.
###Code
hello_s = heartbeat('hello', 5, memory=secret)
hello_s
assert isinstance(hello_s, ostr)
print(hello_s.origin)
###Output
[-1, -1, -1, -1, -1]
###Markdown
We can verify that the secret did not leak out by formulating appropriate assertions:
###Code
assert hello_s.origin == [ostr.UNKNOWN_ORIGIN] * len(hello_s)
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in hello_s.origin)
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
###Output
_____no_output_____
###Markdown
All assertions pass, again confirming that no secret leaked out. Let us now go and exploit `heartbeat()` to reveal its secrets. As `heartbeat()` is unchanged, it is as vulnerable as it was:
###Code
hello_s = heartbeat('hello', 32, memory=secret)
hello_s
###Output
_____no_output_____
###Markdown
Now, however, the reply _does_ contain secret information:
###Code
assert isinstance(hello_s, ostr)
print(hello_s.origin)
with ExpectError():
assert hello_s.origin == [ostr.UNKNOWN_ORIGIN] * len(hello_s)
with ExpectError():
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in hello_s.origin)
with ExpectError():
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_10433/1577803914.py", line 2, in <module>
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
AssertionError (expected)
###Markdown
We can now integrate these assertions into the `heartbeat()` function, causing it to fail before leaking information. Additionally (or alternatively), we can also rewrite our output functions not to give out any secret information. We will leave these two exercises for the reader. Taint-Directed FuzzingThe previous _Taint Aware Fuzzing_ was a bit unsatisfactory in that we could not focus on the specific parts of the grammar that led to dangerous operations. We fix that with _taint-directed fuzzing_ using `TrackingDB`. The idea here is to track the origins of each character that reaches `eval()`, trace them back to the grammar nodes that generated them, and increase the probability of using those nodes again. TrackingDBThe `TrackingDB` is similar to `TaintedDB`. The difference is that whenever execution reaches `my_eval()` with a string that carries origins, we simply raise a `Tainted` exception.
###Code
class TrackingDB(TaintedDB):
def my_eval(self, statement, g, l):
if statement.origin:
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
Next, we need a specially crafted fuzzer that preserves the taints. TaintedGrammarFuzzerWe define a `TaintedGrammarFuzzer` class that ensures that the taints propagate to the derivation tree. This is similar to the `GrammarFuzzer` from the [chapter on grammar fuzzers](GrammarFuzzer.ipynb) except that the origins and taints are preserved.
###Code
import random
from GrammarFuzzer import GrammarFuzzer
from Parser import canonical
class TaintedGrammarFuzzer(GrammarFuzzer):
def __init__(self,
grammar,
start_symbol=START_SYMBOL,
expansion_switch=1,
log=False):
self.tainted_start_symbol = ostr(
start_symbol, origin=[1] * len(start_symbol))
self.expansion_switch = expansion_switch
self.log = log
self.grammar = grammar
self.c_grammar = canonical(grammar)
self.init_tainted_grammar()
def expansion_cost(self, expansion, seen=set()):
symbols = [e for e in expansion if e in self.c_grammar]
if len(symbols) == 0:
return 1
if any(s in seen for s in symbols):
return float('inf')
return sum(self.symbol_cost(s, seen) for s in symbols) + 1
def fuzz_tree(self):
tree = (self.tainted_start_symbol, [])
nt_leaves = [tree]
expansion_trials = 0
while nt_leaves:
idx = random.randint(0, len(nt_leaves) - 1)
key, children = nt_leaves[idx]
expansions = self.ct_grammar[key]
if expansion_trials < self.expansion_switch:
expansion = random.choice(expansions)
else:
costs = [self.expansion_cost(e) for e in expansions]
m = min(costs)
all_min = [i for i, c in enumerate(costs) if c == m]
expansion = expansions[random.choice(all_min)]
new_leaves = [(token, []) for token in expansion]
new_nt_leaves = [e for e in new_leaves if e[0] in self.ct_grammar]
children[:] = new_leaves
nt_leaves[idx:idx + 1] = new_nt_leaves
if self.log:
print("%-40s" % (key + " -> " + str(expansion)))
expansion_trials += 1
return tree
def fuzz(self):
self.derivation_tree = self.fuzz_tree()
return self.tree_to_string(self.derivation_tree)
###Output
_____no_output_____
###Markdown
We use a specially prepared tainted grammar for fuzzing. We mark each individual definition, each individual alternative, and each individual token with a separate origin (using increments of 1000, 100, and 10, respectively, to keep the origin ranges apart). This allows us to track exactly which parts of the grammar were involved in the operations we are interested in.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def init_tainted_grammar(self):
key_increment, alt_increment, token_increment = 1000, 100, 10
key_origin = key_increment
self.ct_grammar = {}
for key, val in self.c_grammar.items():
key_origin += key_increment
os = []
for v in val:
ts = []
key_origin += alt_increment
for t in v:
nt = ostr(t, origin=key_origin)
key_origin += token_increment
ts.append(nt)
os.append(ts)
self.ct_grammar[key] = os
# a use tracking grammar
self.ctp_grammar = {}
for key, val in self.ct_grammar.items():
self.ctp_grammar[key] = [(v, dict(use=0)) for v in val]
###Output
_____no_output_____
###Markdown
As before, we initialize the `TrackingDB`
###Code
trdb = TrackingDB(db.db)
###Output
_____no_output_____
###Markdown
Finally, we need to ensure that the taints are preserved when the tree is converted back to a string. For this, we override `tree_to_string()`:
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def tree_to_string(self, tree):
symbol, children, *_ = tree
e = ostr('')
if children:
return e.join([self.tree_to_string(c) for c in children])
else:
return e if symbol in self.c_grammar else symbol
###Output
_____no_output_____
###Markdown
We define `update_grammar()`, which accepts the set of origins that reached the dangerous operation, together with the derivation tree of the fuzzed string, and updates the usage counts in the enhanced grammar accordingly.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def update_grammar(self, origin, dtree):
def update_tree(dtree, origin):
key, children = dtree
if children:
updated_children = [update_tree(c, origin) for c in children]
corigin = set.union(
*[o for (key, children, o) in updated_children])
corigin = corigin.union(set(key.origin))
return (key, children, corigin)
else:
my_origin = set(key.origin).intersection(origin)
return (key, [], my_origin)
key, children, oset = update_tree(dtree, set(origin))
for key, alts in self.ctp_grammar.items():
for alt, o in alts:
alt_origins = set([i for token in alt for i in token.origin])
if alt_origins.intersection(oset):
o['use'] += 1
###Output
_____no_output_____
###Markdown
With these, we are now ready to fuzz.
###Code
def tree_type(tree):
key, children = tree
return (type(key), key, [tree_type(c) for c in children])
tgf = TaintedGrammarFuzzer(INVENTORY_GRAMMAR_F)
x = None
for _ in range(10):
qtree = tgf.fuzz_tree()
query = tgf.tree_to_string(qtree)
assert isinstance(query, ostr)
try:
print(repr(query))
res = trdb.sql(query)
print(repr(res))
except SQLException as e:
print(e)
except Tainted as e:
print(e)
origin = e.args[0].origin
tgf.update_grammar(origin, qtree)
except:
traceback.print_exc()
break
print()
###Output
'select (g!=(9)!=((:)==2==9)!=J)==-7 from inventory'
Tainted[((g!=(9)!=((:)==2==9)!=J)==-7)]
'delete from inventory where ((c)==T)!=5==(8!=Y)!=-5'
Tainted[((c)==T)!=5==(8!=Y)!=-5]
'select (((w==(((X!=------8)))))) from inventory'
Tainted[((((w==(((X!=------8)))))))]
'delete from inventory where ((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))'
Tainted[((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))]
'delete from inventory where ((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))'
Tainted[((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))]
'delete from inventory where o!=2==---5==3!=t'
Tainted[o!=2==---5==3!=t]
'select (2) from inventory'
Tainted[((2))]
'select _ from inventory'
Tainted[(_)]
'select L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)) from inventory'
Tainted[(L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)))]
'delete from inventory where _==(7==(9)!=(---5)==1)==-8'
Tainted[_==(7==(9)!=(---5)==1)==-8]
###Markdown
We can now inspect our enhanced grammar to see how many times each rule was used.
###Code
tgf.ctp_grammar
###Output
_____no_output_____
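###Markdown
Before moving on, here is a small illustrative sketch (the helper `most_used_alternatives()` is our own addition, not part of the chapter's infrastructure): we can rank the alternatives in the enhanced grammar by their `use` counts to see which ones were most often involved in reaching the dangerous `eval()` call.
###Code
def most_used_alternatives(ctp_grammar, n=5):
    # Collect (use count, nonterminal, alternative) triples and sort by count
    uses = [(counts['use'], key, alt)
            for key, alts in ctp_grammar.items()
            for alt, counts in alts]
    return sorted(uses, reverse=True)[:n]

most_used_alternatives(tgf.ctp_grammar)
###Output
_____no_output_____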
###Markdown
From here, the idea would be to focus on the rules that reached dangerous operations more often, and to increase the probability of choosing such expansions. The Limits of Taint TrackingWhile our framework can detect information leakage, it is by no means perfect. There are several ways in which taints can get lost and information thus may still leak out. ConversionsWe only track taints and origins through _strings_ and _characters_. If we convert these to numbers (or other data), the information is lost. As an example, consider this function, converting individual characters to numbers and back:
###Code
def strip_all_info(s):
t = ""
for c in s:
t += chr(ord(c))
return t
othello = ostr("Secret")
othello
othello.origin # type: ignore
###Output
_____no_output_____
###Markdown
The taints and origins will not propagate through the number conversion:
###Code
othello_stripped = strip_all_info(othello)
othello_stripped
with ExpectError():
    othello_stripped.origin
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_10433/588526133.py", line 2, in <module>
othello_stripped.origin
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
This issue could be addressed by extending numbers with taints and origins, just as we did for strings. At some point, however, this will still break down, because as soon as an internal C function in the Python library is reached, the taint will not propagate into and across the C function. (Unless one starts implementing dynamic taints for these, that is.) Internal C libraries As we mentioned before, calls to _internal_ C libraries do not propagate taints. For example, while the following preserves the taints,
###Code
hello = ostr('hello', origin=100)
world = ostr('world', origin=200)
(hello + ' ' + world).origin
###Output
_____no_output_____
###Markdown
a call to a `join` that should be equivalent will fail.
###Code
with ExpectError():
''.join([hello, ' ', world]).origin # type: ignore
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_10433/2341342688.py", line 2, in <module>
''.join([hello, ' ', world]).origin # type: ignore
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
Implicit Information FlowEven if one could taint all data in a program, there still would be means to break information flow – notably by turning explicit flow into _implicit_ flow, or data flow into _control flow_. Here is an example:
###Code
def strip_all_info_again(s):
t = ""
for c in s:
if c == 'a':
t += 'a'
elif c == 'b':
t += 'b'
elif c == 'c':
t += 'c'
...
###Output
_____no_output_____
###Markdown
With such a function, there is no explicit data flow between the characters in `s` and the characters in `t`; yet, the strings would be identical. This problem frequently occurs in programs that process and manipulate external input. Enforcing TaintingConversions and implicit information flow are just two of several ways in which taint and origin information can get lost. To address the problem, the best solution is to _always assume the worst from untainted strings_:* When it comes to trust, an untainted string should be treated as _possibly untrusted_, and hence not relied upon unless sanitized.* When it comes to privacy, an untainted string should be treated as _possibly secret_, and hence not leaked out.As a consequence, your program should always have two kinds of taints: one for explicitly trusted (or secret) and one for explicitly untrusted (or non-secret) strings. If a taint gets lost along the way, you may have to restore it from its sources – not unlike the string methods discussed above. The benefit is a trusted application, in which each and every information flow can be checked at runtime, with violations quickly discovered through automated tests. Synopsis This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string. Tracking String Taints`tstr` objects are replacements for Python strings that allow one to track and check _taints_ – that is, information on where a string originated from. For instance, one can mark strings that originate from third-party input with a taint of "LOW", meaning that they have a low security level. The taint is passed in the constructor of a `tstr` object:
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
A `tstr` object is fully compatible with original Python strings. For instance, we can index it and access substrings:
###Code
thello[:4]
###Output
_____no_output_____
###Markdown
However, the `tstr` object also stores the taint, which can be accessed using the `taint` attribute:
###Code
thello.taint
###Output
_____no_output_____
###Markdown
The neat thing about taints is that they propagate to all strings derived from the original tainted string. Indeed, any operation on a `tstr` string that results in a string fragment produces another `tstr` object that includes the original taint. For example:
###Code
thello[1:2].taint # type: ignore
###Output
_____no_output_____
###Markdown
`tstr` objects duplicate most `str` methods, as indicated in the class diagram:
###Code
# ignore
from ClassDiagram import display_class_hierarchy
display_class_hierarchy(tstr)
###Output
_____no_output_____
###Markdown
Tracking Character Origins`ostr` objects extend `tstr` objects by tracking not only a taint, but also the originating _indexes_ from the input string. This allows you to track exactly where individual characters came from. Assume you have a long string, which at index 100 contains the password `"joshua1234"`. Then you can save this origin information using an `ostr` as follows:
###Code
secret = ostr("joshua1234", origin=100, taint='SECRET')
###Output
_____no_output_____
###Markdown
The `origin` attribute of an `ostr` provides access to a list of indexes:
###Code
secret.origin
secret.taint
###Output
_____no_output_____
###Markdown
`ostr` objects are compatible with Python strings, except that string operations return `ostr` objects (together with the saved origin and taint information). An origin index of `-1` indicates that the corresponding character has no origin as supplied to the `ostr()` constructor:
###Code
secret_substr = (secret[0:4] + "-" + secret[6:])
secret_substr.taint
secret_substr.origin
###Output
_____no_output_____
###Markdown
`ostr` objects duplicate most `str` methods, as indicated in the class diagram:
###Code
# ignore
display_class_hierarchy(ostr)
###Output
_____no_output_____
###Markdown
Lessons Learned* String-based and character-based taints allow us to dynamically track the information flow from input to the internals of a system and back to the output.* Checking taints allows us to discover untrusted inputs and information leakage at runtime.* Data conversions and implicit data flow may strip taint information; the resulting untainted strings should be treated as having the worst possible taint.* Taints can be used in conjunction with fuzzing to provide a more robust indication of incorrect behavior than simply relying on program crashes. Next StepsAn even better alternative to our taint-directed fuzzing is to make use of _symbolic_ techniques that take the semantics of the program under test into account. The chapter on [flow fuzzing](FlowFuzzer.ipynb) introduces these symbolic techniques for the purpose of exploring information flows; the subsequent chapter on [symbolic fuzzing](SymbolicFuzzer.ipynb) then shows how to make full-fledged use of symbolic execution for covering code. Similarly, [search based fuzzing](SearchBasedFuzzer.ipynb) can often provide a cheaper exploration strategy. BackgroundTaint analysis for Python using a library approach, as we implemented it in this chapter, was discussed by Conti et al. \cite{Conti2010}. Exercises Exercise 1: Tainted NumbersIntroduce a class `tint` (for tainted integer) that, like `tstr`, has a taint attribute that gets passed on from `tint` to `tint`. Part 1: CreationImplement the `tint` class such that taints are set:```pythonx = tint(42, taint='SECRET')assert x.taint == 'SECRET'``` **Solution.** This is pretty straightforward, as we can apply the same scheme as for `tstr`:
###Code
class tint(int):
def __new__(cls, value, *args, **kw):
return int.__new__(cls, value)
def __init__(self, value, taint=None, **kwargs):
self.taint = taint
x = tint(42, taint='SECRET')
assert x.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 2: Arithmetic expressionsEnsure that taints get passed along arithmetic expressions; support addition, subtraction, multiplication, and division operators.```pythony = x + 1assert y.taint == 'SECRET'``` **Solution.** As with `tstr`, we implement a `create()` method and a convenience function to quickly define all arithmetic operations:
###Code
class tint(tint):
def create(self, n):
# print("New tint from", n)
return tint(n, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_int_wrapper()` function creates a wrapper around an existing `int` method which attaches the taint to the result of the method:
###Code
def make_int_wrapper(fun):
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
# print(fun, args, kwargs, "=", repr(res))
return self.create(res)
return proxy
###Output
_____no_output_____
###Markdown
We do this for all arithmetic operators:
###Code
for name in ['__add__', '__radd__', '__mul__', '__rmul__', '__sub__',
'__floordiv__', '__truediv__']:
fun = getattr(int, name)
setattr(tint, name, make_int_wrapper(fun))
x = tint(42, taint='SECRET')
y = x + 1
y.taint # type: ignore
###Output
_____no_output_____
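###Markdown
A few additional illustrative checks (ours, not part of the exercise text) confirm that the other wrapped operators propagate the taint as well:
###Code
assert (x - 2).taint == 'SECRET'
assert (x * 3).taint == 'SECRET'
assert (x / 2).taint == 'SECRET'
###Output
_____no_output_____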
###Markdown
Part 3: Passing taints from integers to stringsConverting a tainted integer into a string (using `repr()`) should yield a tainted string:```pythonx_s = repr(x)assert x_s.taint == 'SECRET'``` **Solution.** We define the string conversion functions such that they return a tainted string (`tstr`):
###Code
class tint(tint):
def __repr__(self) -> tstr:
s = int.__repr__(self)
return tstr(s, taint=self.taint)
class tint(tint):
def __str__(self) -> tstr:
return tstr(int.__str__(self), taint=self.taint)
x = tint(42, taint='SECRET')
x_s = repr(x)
assert isinstance(x_s, tstr)
assert x_s.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 4: Passing taints from strings to integersConverting a tainted object (with a `taint` attribute) to an integer should pass that taint:```pythonpassword = tstr('1234', taint='NOT_EXACTLY_SECRET')x = tint(password)assert x == 1234assert x.taint == 'NOT_EXACTLY_SECRET'``` **Solution.** This can be done by having the `__init__()` constructor check for a `taint` attibute:
###Code
class tint(tint):
def __init__(self, value, taint=None, **kwargs):
if taint is not None:
self.taint = taint
else:
self.taint = getattr(value, 'taint', None)
password = tstr('1234', taint='NOT_EXACTLY_SECRET')
x = tint(password)
assert x == 1234
assert x.taint == 'NOT_EXACTLY_SECRET'
###Output
_____no_output_____
###Markdown
A table can be retrieved by name using the `table()` method call.
###Code
class DB(DB):
def table(self, t_name):
if t_name in self.db:
return self.db[t_name]
raise SQLException('Table (%s) was not found' % repr(t_name))
###Output
_____no_output_____
###Markdown
Here is an example of how to use both. We create a table `inventory` with four columns: `year`, `kind`, `company`, and `model`. Initially, our table is empty.
###Code
def sample_db():
db = DB()
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
return db
###Output
_____no_output_____
###Markdown
Using `table()`, we can retrieve the table definition as well as its contents.
###Code
db = sample_db()
db.table('inventory')
###Output
_____no_output_____
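###Markdown
Requesting a table that has not been defined raises an `SQLException`. (This is a small illustrative check we add here; `ExpectError` is the same helper used later in this chapter.)
###Code
from ExpectError import ExpectError

with ExpectError():
    db.table('stocks')
###Output
_____no_output_____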
###Markdown
We also define `column()` for retrieving the column definition from a table declaration.
###Code
class DB(DB):
def column(self, table_decl, c_name):
if c_name in table_decl:
return table_decl[c_name]
raise SQLException('Column (%s) was not found' % repr(c_name))
db = sample_db()
decl, rows = db.table('inventory')
db.column(decl, 'year')
###Output
_____no_output_____
###Markdown
Executing SQL StatementsThe `sql()` method of `DB` executes SQL statements. It inspects its arguments, and dispatches the query based on the kind of SQL statement to be executed.
###Code
class DB(DB):
def do_select(self, query):
...
def do_update(self, query):
...
def do_insert(self, query):
...
def do_delete(self, query):
...
def sql(self, query):
methods = [('select ', self.do_select),
('update ', self.do_update),
('insert into ', self.do_insert),
('delete from', self.do_delete)]
for key, method in methods:
if query.startswith(key):
return method(query[len(key):])
raise SQLException('Unknown SQL (%s)' % query)
###Output
_____no_output_____
###Markdown
Here's an example of how to use the `DB` class:
###Code
some_db = DB()
some_db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
However, at this point, the individual methods for handling SQL statements are not yet defined. Let us do this in the next steps. Excursion: Implementing SQL Statements Selecting DataThe `do_select()` method handles SQL `select` statements to retrieve data from a table.
###Code
class DB(DB):
def do_select(self, query):
FROM, WHERE = ' from ', ' where '
table_start = query.find(FROM)
if table_start < 0:
raise SQLException('no table specified')
where_start = query.find(WHERE)
select = query[:table_start]
if where_start >= 0:
t_name = query[table_start + len(FROM):where_start]
where = query[where_start + len(WHERE):]
else:
t_name = query[table_start + len(FROM):]
where = ''
_, table = self.table(t_name)
if where:
selected = self.expression_clause(table, "(%s)" % where)
selected_rows = [hm for i, data, hm in selected if data]
else:
selected_rows = table
rows = self.expression_clause(selected_rows, "(%s)" % select)
return [data for i, data, hm in rows]
###Output
_____no_output_____
###Markdown
The `expression_clause()` method is used for two purposes:1. In the form `select` $x$, $y$, $z$ `from` $t$, it _evaluates_ (and returns) the expressions $x$, $y$, $z$ in the contexts of the selected rows.2. If a clause `where` $p$ is given, it also evaluates $p$ in the context of the rows and includes the rows in the selection only if $p$ holds.To evaluate expressions like $x$, $y$, $z$ or $p$, the method `expression_clause()` makes use of the Python `eval()` evaluation function.
###Code
class DB(DB):
def expression_clause(self, table, statement):
selected = []
for i, hm in enumerate(table):
selected.append((i, self.my_eval(statement, {}, hm), hm))
return selected
###Output
_____no_output_____
###Markdown
If `eval()` fails for whatever reason, we raise an exception:
###Code
class DB(DB):
def my_eval(self, statement, g, l):
try:
return eval(statement, g, l)
except Exception:
raise SQLException('Invalid WHERE (%s)' % repr(statement))
###Output
_____no_output_____
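###Markdown
To see what `expression_clause()` does, here is a small illustrative call on two hand-made rows (not part of the original examples): the expression is evaluated once per row, with the row's columns available as local variables.
###Code
demo_db = DB()
demo_db.expression_clause([{'year': 1997}, {'year': 2001}], "(year < 2000)")
###Output
_____no_output_____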
###Markdown
**Note:** Using `eval()` here introduces some important security issues, which we will discuss later in this chapter. Here's how we can use `sql()` to issue a query. Note that the table is still empty.
###Code
db = sample_db()
db.sql('select year from inventory')
db = sample_db()
db.sql('select year from inventory where year == 2018')
###Output
_____no_output_____
###Markdown
Inserting DataThe `do_insert()` method handles SQL `insert` statements.
###Code
class DB(DB):
def do_insert(self, query):
VALUES = ' values '
table_end = query.find('(')
t_name = query[:table_end].strip()
names_end = query.find(')')
decls, table = self.table(t_name)
names = [i.strip() for i in query[table_end + 1:names_end].split(',')]
# verify columns exist
for k in names:
self.column(decls, k)
values_start = query.find(VALUES)
if values_start < 0:
raise SQLException('Invalid INSERT (%s)' % repr(query))
values = [
i.strip() for i in query[values_start + len(VALUES) + 1:-1].split(',')
]
if len(names) != len(values):
raise SQLException(
'names(%s) != values(%s)' % (repr(names), repr(values)))
# dict lookups happen in C code, so we can't use that
kvs = {}
for k,v in zip(names, values):
for key,kval in decls.items():
if k == key:
kvs[key] = self.convert(kval, v)
table.append(kvs)
###Output
_____no_output_____
###Markdown
In SQL, a column can come in any supported data type. To ensure it is stored using the type originally declared, we need the ability to convert values to their declared types; this is provided by `convert()`.
###Code
import ast
class DB(DB):
def convert(self, cast, value):
try:
return cast(ast.literal_eval(value))
except:
raise SQLException('Invalid Conversion %s(%s)' % (cast, value))
###Output
_____no_output_____
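###Markdown
To illustrate `convert()` in isolation (a small added check, not part of the original examples): values are parsed as Python literals and then cast to the declared column type.
###Code
demo_db = DB()
assert demo_db.convert(int, '1997') == 1997
assert demo_db.convert(str, '"van"') == 'van'
###Output
_____no_output_____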
###Markdown
Here is an example of how to use the SQL `insert` command:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.table('inventory')
###Output
_____no_output_____
###Markdown
With the database filled, we can also run more complex queries:
###Code
db.sql('select year + 1, kind from inventory')
db.sql('select year, kind from inventory where year == 1997')
###Output
_____no_output_____
###Markdown
Updating DataSimilarly, `do_update()` handles SQL `update` statements.
###Code
class DB(DB):
def do_update(self, query):
SET, WHERE = ' set ', ' where '
table_end = query.find(SET)
if table_end < 0:
raise SQLException('Invalid UPDATE (%s)' % repr(query))
set_end = table_end + 5
t_name = query[:table_end]
decls, table = self.table(t_name)
names_end = query.find(WHERE)
if names_end >= 0:
names = query[set_end:names_end]
where = query[names_end + len(WHERE):]
else:
names = query[set_end:]
where = ''
sets = [[i.strip() for i in name.split('=')]
for name in names.split(',')]
# verify columns exist
for k, v in sets:
self.column(decls, k)
if where:
selected = self.expression_clause(table, "(%s)" % where)
updated = [hm for i, d, hm in selected if d]
else:
updated = table
for hm in updated:
for k, v in sets:
# we can not do dict lookups because it is implemented in C.
for key, kval in decls.items():
if key == k:
hm[key] = self.convert(kval, v)
return "%d records were updated" % len(updated)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can update things:
###Code
db.sql('update inventory set year = 1998 where year == 1997')
db.sql('select year from inventory')
db.table('inventory')
###Output
_____no_output_____
###Markdown
Deleting DataFinally, SQL `delete` statements are handled by `do_delete()`.
###Code
class DB(DB):
def do_delete(self, query):
WHERE = ' where '
table_end = query.find(WHERE)
if table_end < 0:
raise SQLException('Invalid DELETE (%s)' % query)
t_name = query[:table_end].strip()
_, table = self.table(t_name)
where = query[table_end + len(WHERE):]
selected = self.expression_clause(table, "%s" % where)
deleted = [i for i, d, hm in selected if d]
for i in sorted(deleted, reverse=True):
del table[i]
return "%d records were deleted" % len(deleted)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can delete data:
###Code
db.sql('delete from inventory where company == "Ford"')
###Output
_____no_output_____
###Markdown
Our database is now empty:
###Code
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
End of Excursion Here is how our database can be used.
###Code
db = DB()
###Output
_____no_output_____
###Markdown
We first create a table in our database with the correct data types.
###Code
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
###Output
_____no_output_____
###Markdown
Here is a simple convenience function to update the table using our dataset.
###Code
def update_inventory(sqldb, vehicle):
inventory_def = sqldb.db['inventory'][0]
k, v = zip(*inventory_def.items())
val = [repr(cast(val)) for cast, val in zip(v, vehicle.split(','))]
sqldb.sql('insert into inventory (%s) values (%s)' % (','.join(k),
','.join(val)))
for V in VEHICLES:
update_inventory(db, V)
###Output
_____no_output_____
###Markdown
Our database now contains the same dataset as `VEHICLES` under the `inventory` table.
###Code
db.db
###Output
_____no_output_____
###Markdown
Here is a sample select statement.
###Code
db.sql('select year,kind from inventory')
db.sql("select company,model from inventory where kind == 'car'")
###Output
_____no_output_____
###Markdown
We can run updates on it.
###Code
db.sql("update inventory set year = 1998, company = 'Suzuki' where kind == 'van'")
db.db
###Output
_____no_output_____
###Markdown
It can even do mathematics on the fly!
###Code
db.sql('select int(year)+10 from inventory')
###Output
_____no_output_____
###Markdown
Adding a new row to our table.
###Code
db.sql("insert into inventory (year, kind, company, model) values (1, 'charriot', 'Rome', 'Quadriga')")
db.db
###Output
_____no_output_____
###Markdown
Which we then delete.
###Code
db.sql("delete from inventory where year < 1900")
###Output
_____no_output_____
###Markdown
Fuzzing SQLTo verify that everything is OK, let us fuzz. First we define our grammar. Excursion: Defining a SQL grammar
###Code
import string
from Grammars import START_SYMBOL, Grammar, Expansion, \
is_valid_grammar, extend_grammar
EXPR_GRAMMAR: Grammar = {
"<start>": ["<expr>"],
"<expr>": ["<bexpr>", "<aexpr>", "(<expr>)", "<term>"],
"<bexpr>": [
"<aexpr><lt><aexpr>",
"<aexpr><gt><aexpr>",
"<expr>==<expr>",
"<expr>!=<expr>",
],
"<aexpr>": [
"<aexpr>+<aexpr>", "<aexpr>-<aexpr>", "<aexpr>*<aexpr>",
"<aexpr>/<aexpr>", "<word>(<exprs>)", "<expr>"
],
"<exprs>": ["<expr>,<exprs>", "<expr>"],
"<lt>": ["<"],
"<gt>": [">"],
"<term>": ["<number>", "<word>"],
"<number>": ["<integer>.<integer>", "<integer>", "-<number>"],
"<integer>": ["<digit><integer>", "<digit>"],
"<word>": ["<word><letter>", "<word><digit>", "<letter>"],
"<digit>":
list(string.digits),
"<letter>":
list(string.ascii_letters + '_:.')
}
assert is_valid_grammar(EXPR_GRAMMAR)
PRINTABLE_CHARS: List[str] = [i for i in string.printable
if i not in "<>'\"\t\n\r\x0b\x0c\x00"] + ['<lt>', '<gt>']
INVENTORY_GRAMMAR = extend_grammar(EXPR_GRAMMAR,
{
'<start>': ['<query>'],
'<query>': [
'select <exprs> from <table>',
'select <exprs> from <table> where <bexpr>',
'insert into <table> (<names>) values (<literals>)',
'update <table> set <assignments> where <bexpr>',
'delete from <table> where <bexpr>',
],
'<table>': ['<word>'],
'<names>': ['<column>,<names>', '<column>'],
'<column>': ['<word>'],
'<literals>': ['<literal>', '<literal>,<literals>'],
'<literal>': ['<number>', "'<chars>'"],
'<assignments>': ['<kvp>,<assignments>', '<kvp>'],
'<kvp>': ['<column>=<value>'],
'<value>': ['<word>'],
'<chars>': ['<char>', '<char><chars>'],
'<char>': PRINTABLE_CHARS, # type: ignore
})
assert is_valid_grammar(INVENTORY_GRAMMAR)
###Output
_____no_output_____
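###Markdown
As an illustrative aside, we can already generate a few raw queries from this grammar. Note that table names are random `<word>` expansions, so most queries will not refer to our `inventory` table. (`GrammarFuzzer` is imported again below, where we start fuzzing in earnest.)
###Code
from GrammarFuzzer import GrammarFuzzer

[GrammarFuzzer(INVENTORY_GRAMMAR).fuzz() for _ in range(3)]
###Output
_____no_output_____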
###Markdown
As can be seen from the source of our database, the functions always check whether the table name is correct. Hence, we modify the grammar such that it always chooses our particular table `inventory`, giving the fuzzer a better chance of reaching deeper functionality. We will see in later sections how this can be done automatically.
###Code
INVENTORY_GRAMMAR_F = extend_grammar(INVENTORY_GRAMMAR,
{'<table>': ['inventory']})
###Output
_____no_output_____
###Markdown
End of Excursion
###Code
from GrammarFuzzer import GrammarFuzzer
import traceback  # used below to report unexpected exceptions
gf = GrammarFuzzer(INVENTORY_GRAMMAR_F)
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = db.sql(query)
print(repr(res))
except SQLException as e:
print("> ", e)
pass
except:
traceback.print_exc()
break
print()
###Output
'select O6fo,-977091.1,-36.46 from inventory'
> Invalid WHERE ('(O6fo,-977091.1,-36.46)')
'select g3 from inventory where -3.0!=V/g/b+Q*M*G'
> Invalid WHERE ('(-3.0!=V/g/b+Q*M*G)')
'update inventory set z=a,x=F_,Q=K where p(M)<_*S'
> Column ('z') was not found
'update inventory set R=L5pk where e*l*y-u>K+U(:)'
> Column ('R') was not found
'select _/d*Q+H/d(k)<t+M-A+P from inventory'
> Invalid WHERE ('(_/d*Q+H/d(k)<t+M-A+P)')
'select F5 from inventory'
> Invalid WHERE ('(F5)')
'update inventory set jWh.=a6 where wcY(M)>IB7(i)'
> Column ('jWh.') was not found
'update inventory set U=y where L(W<c,(U!=W))<V(((q)==m<F),O,l)'
> Column ('U') was not found
'delete from inventory where M/b-O*h*E<H-W>e(Y)-P'
> Invalid WHERE ('M/b-O*h*E<H-W>e(Y)-P')
'select ((kP(86)+b*S+J/Z/U+i(U))) from inventory'
> Invalid WHERE ('(((kP(86)+b*S+J/Z/U+i(U))))')
###Markdown
Fuzzing does not seem to have triggered any crashes. However, are crashes the only errors that we should be worried about? The Evil of EvalIn our database implementation – notably in the `expression_clause()` method – we have made use of `eval()` to evaluate expressions using the Python interpreter. This allows us to unleash the full power of Python expressions within our SQL statements.
###Code
db.sql('select year from inventory where year < 2000')
###Output
_____no_output_____
###Markdown
In the above query, the clause `year < 2000` is evaluated by `expression_clause()` using Python in the context of each row; hence, `year < 2000` evaluates to either `True` or `False`. The same holds for the expressions being `select`ed:
###Code
db.sql('select year - 1900 if year < 2000 else year - 2000 from inventory')
###Output
_____no_output_____
###Markdown
This works because `year - 1900 if year < 2000 else year - 2000` is a valid Python expression. (It is not a valid SQL expression, though.) The problem with the above is that there is _no limitation_ to what the Python expression can do. What if the user tries the following?
###Code
db.sql('select __import__("os").popen("pwd").read() from inventory')
###Output
_____no_output_____
###Markdown
The above statement effectively reads from the user's file system. Instead of `os.popen("pwd").read()`, it could execute arbitrary Python commands – to access data, install software, run a background process. This is where "the full power of Python expressions" turns back on us. What we want is to allow our _program_ to make full use of its power; yet, the _user_ (or any third party) should not be entrusted to do the same. Hence, we need to differentiate between (trusted) _input from the program_ and (untrusted) _input from the user_. One method that allows such differentiation is that of *dynamic taint analysis*. The idea is to identify the functions that accept user input as *sources* that *taint* any string that comes in through them, and those functions that perform dangerous operations as *sinks*. Finally, we bless certain functions as *taint sanitizers*. The idea is that an input from a source should never reach a sink without undergoing sanitization first. This allows us to use a stronger oracle than simply checking for crashes. Tracking String TaintsThere are various levels of taint tracking that one can perform. The simplest is to track that a string fragment originated in a specific environment, and has not undergone a taint removal process. For this, we simply need to wrap the original string with an environment identifier (the _taint_) with `tstr`, and produce `tstr` instances on each operation that results in another string fragment. The attribute `taint` holds a label identifying the environment this instance was derived from. A Class for Tainted StringsFor capturing information flows we need a new string class. The idea is to use the new tainted string class `tstr` as a wrapper on the original `str` class. However, `str` is an *immutable* class. Hence, it does not call its `__init__()` method after being constructed. This means that any subclasses of `str` also will not get the `__init__()` method called. If we want to get our initialization routine called, we need to [hook into `__new__()`](https://docs.python.org/3/reference/datamodel.html#basic-customization) and return an instance of our own class. We combine this with our initialization code in `__init__()`.
###Code
class tstr(str):
"""Wrapper for strings, saving taint information"""
def __new__(cls, value, *args, **kw):
"""Create a tstr() instance. Used internally."""
return str.__new__(cls, value)
def __init__(self, value: Any, taint: Any = None, **kwargs) -> None:
"""Constructor.
`value` is the string value the `tstr` object is to be constructed from.
`taint` is an (optional) taint to be propagated to derived strings."""
self.taint: Any = taint
class tstr(tstr):
def __repr__(self) -> tstr:
"""Return a representation."""
return tstr(str.__repr__(self), taint=self.taint)
class tstr(tstr):
def __str__(self) -> str:
"""Convert to string"""
return str.__str__(self)
###Output
_____no_output_____
###Markdown
For example, if we wrap `"hello"` in `tstr`, then we should be able to access its taint:
###Code
thello: tstr = tstr('hello', taint='LOW')
thello.taint
repr(thello).taint # type: ignore
###Output
_____no_output_____
###Markdown
By default, when we wrap a string, it is tainted. Hence, we also need a way to clear the taint in the string. One way is to simply return a `str` instance as above. However, one may sometimes wish to remove the taint from an existing instance. This is accomplished with `clear_taint()`. During `clear_taint()`, we simply set the taint to `None`. This method comes with a companion method `has_taint()`, which checks whether a `tstr` instance is currently tainted.
###Code
class tstr(tstr):
def clear_taint(self):
"""Remove taint"""
self.taint = None
return self
def has_taint(self):
"""Check if taint is present"""
return self.taint is not None
###Output
_____no_output_____
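###Markdown
As a quick illustrative check (using a throwaway string), we can verify that tainting, querying, and clearing the taint work as expected:
###Code
tdemo = tstr('hello', taint='LOW')
assert tdemo.has_taint()
tdemo.clear_taint()
assert not tdemo.has_taint()
###Output
_____no_output_____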
###Markdown
String OperatorsTo propagate the taint, we have to extend string functions, such as operators. We can do so in one single big step, overloading all string methods and operators. When we create a new string from an existing tainted string, we propagate its taint.
###Code
class tstr(tstr):
def create(self, s):
return tstr(s, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_str_wrapper()` function creates a wrapper around an existing string method which attaches the taint to the result of the method:
###Code
class tstr(tstr):
@staticmethod
def make_str_wrapper(fun):
"""Make `fun` (a `str` method) a method in `tstr`"""
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
return self.create(res)
if hasattr(fun, '__doc__'):
# Copy docstring
proxy.__doc__ = fun.__doc__
return proxy
###Output
_____no_output_____
###Markdown
We do this for all string methods that return a string:
###Code
def informationflow_init_1():
for name in ['__format__', '__mod__', '__rmod__', '__getitem__',
'__add__', '__mul__', '__rmul__',
'capitalize', 'casefold', 'center', 'encode',
'expandtabs', 'format', 'format_map', 'join',
'ljust', 'lower', 'lstrip', 'replace',
'rjust', 'rstrip', 'strip', 'swapcase', 'title', 'translate', 'upper']:
fun = getattr(str, name)
setattr(tstr, name, tstr.make_str_wrapper(fun))
informationflow_init_1()
INITIALIZER_LIST = [informationflow_init_1]
def initialize():
for fn in INITIALIZER_LIST:
fn()
###Output
_____no_output_____
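###Markdown
After this initialization, ordinary string methods return tainted strings as well. Here is a quick illustrative check:
###Code
tdemo = tstr('hello', taint='LOW')
assert isinstance(tdemo.upper(), tstr)
assert tdemo.upper().taint == 'LOW'
###Output
_____no_output_____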
###Markdown
The one missing operator is `+` with a regular string on the left side and a tainted string on the right side. Python supports a `__radd__()` method which is invoked if the associated object is used on the right side of an addition.
###Code
class tstr(tstr):
def __radd__(self, value):
"""Return value + self, as a `tstr` object"""
return self.create(value + str(self))
###Output
_____no_output_____
###Markdown
With this, we are already done. Let us create a string `thello` with a taint `LOW`.
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
Now, any substring will also be tainted:
###Code
thello[0].taint # type: ignore
thello[1:3].taint # type: ignore
###Output
_____no_output_____
###Markdown
String additions will return a `tstr` object with the taint:
###Code
(tstr('foo', taint='HIGH') + 'bar').taint # type: ignore
###Output
_____no_output_____
###Markdown
Our `__radd__()` method ensures this also works if the `tstr` occurs on the right side of a string addition:
###Code
('foo' + tstr('bar', taint='HIGH')).taint # type: ignore
thello += ', world' # type: ignore
thello.taint # type: ignore
###Output
_____no_output_____
###Markdown
Other operators such as multiplication also work:
###Code
(thello * 5).taint # type: ignore
('hw %s' % thello).taint # type: ignore
(tstr('hello %s', taint='HIGH') % 'world').taint # type: ignore
###Output
_____no_output_____
###Markdown
Tracking Untrusted InputSo, what can one do with tainted strings? We reconsider the `DB` example. We define a "better" `TrustedDB` which only accepts strings tainted as `"TRUSTED"`.
###Code
class TrustedDB(DB):
def sql(self, s):
assert isinstance(s, tstr), "Need a tainted string"
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
return super().sql(s)
###Output
_____no_output_____
###Markdown
Feeding a string with an "unknown" (i.e., non-existing) trust level – here, a plain untainted string – will cause `TrustedDB` to fail:
###Code
bdb = TrustedDB(db.db)
from ExpectError import ExpectError
with ExpectError():
bdb.sql("select year from INVENTORY")
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14685/3935989889.py", line 2, in <module>
bdb.sql("select year from INVENTORY")
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14685/995123203.py", line 3, in sql
assert isinstance(s, tstr), "Need a tainted string"
AssertionError: Need a tainted string (expected)
###Markdown
Additionally, any user input would originally be tagged with `"UNTRUSTED"` as its taint. If we place an untrusted string into our trusted database, it will also fail:
###Code
bad_user_input = tstr('__import__("os").popen("ls").read()', taint='UNTRUSTED')
with ExpectError():
bdb.sql(bad_user_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14685/3307042773.py", line 3, in <module>
bdb.sql(bad_user_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14685/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
Hence, somewhere along the computation, we have to turn the "untrusted" inputs into "trusted" strings. This process is called *sanitization*. A simple sanitization function for our purposes could ensure that the input matches a restricted pattern of allowed characters (notably excluding quotes); if this is the case, then the input gets a new `"TRUSTED"` taint. If not, we turn the string into an (untrusted) empty string; other alternatives would be to raise an error or to escape or delete "untrusted" characters.
###Code
import re
def sanitize(user_input):
assert isinstance(user_input, tstr)
if re.match(
r'^select +[-a-zA-Z0-9_, ()]+ from +[-a-zA-Z0-9_, ()]+$', user_input):
return tstr(user_input, taint='TRUSTED')
else:
return tstr('', taint='UNTRUSTED')
good_user_input = tstr("select year,model from inventory", taint='UNTRUSTED')
sanitized_input = sanitize(good_user_input)
sanitized_input
sanitized_input.taint
bdb.sql(sanitized_input)
###Output
_____no_output_____
###Markdown
Let us now try out our untrusted input:
###Code
sanitized_input = sanitize(bad_user_input)
sanitized_input
sanitized_input.taint
with ExpectError():
bdb.sql(sanitized_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14685/249000876.py", line 2, in <module>
bdb.sql(sanitized_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14685/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
In a similar fashion, we can prevent SQL and code injections discussed in [the chapter on Web fuzzing](WebFuzzer.ipynb). Taint Aware FuzzingWe can also use tainting to _direct fuzzing to those grammar rules that are likely to generate dangerous inputs._ The idea here is to identify inputs generated by our fuzzer that lead to untrusted execution. First we define the exception to be thrown when a tainted value reaches a dangerous operation.
###Code
class Tainted(Exception):
def __init__(self, v):
self.v = v
def __str__(self):
return 'Tainted[%s]' % self.v
###Output
_____no_output_____
###Markdown
TaintedDBNext, since `my_eval()` is the most dangerous operation in the `DB` class, we define a new class `TaintedDB` that overrides `my_eval()` to throw an exception whenever an untrusted string reaches it.
###Code
class TaintedDB(DB):
def my_eval(self, statement, g, l):
if statement.taint != 'TRUSTED':
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
We initialize an instance of `TaintedDB`:
###Code
tdb = TaintedDB()
tdb.db = db.db
###Output
_____no_output_____
###Markdown
Then we start fuzzing.
###Code
import traceback
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = tdb.sql(tstr(query, taint='UNTRUSTED'))
print(repr(res))
except SQLException as e:
pass
except Tainted as e:
print("> ", e)
except:
traceback.print_exc()
break
print()
###Output
'delete from inventory where y/u-l+f/y<Y(c)/A-H*q'
> Tainted[y/u-l+f/y<Y(c)/A-H*q]
"insert into inventory (G,Wmp,sl3hku3) values ('<','?')"
"insert into inventory (d0) values (',_G')"
'select P*Q-w/x from inventory where X<j==:==j*r-f'
> Tainted[(X<j==:==j*r-f)]
'select a>F*i from inventory where Q/I-_+P*j>.'
> Tainted[(Q/I-_+P*j>.)]
'select (V-i<T/g) from inventory where T/r/G<FK(m)/(i)'
> Tainted[(T/r/G<FK(m)/(i))]
'select (((i))),_(S,_)/L-k<H(Sv,R,n,W,Y) from inventory'
> Tainted[((((i))),_(S,_)/L-k<H(Sv,R,n,W,Y))]
'select (N==c*U/P/y),i-e/n*y,T!=w,u from inventory'
> Tainted[((N==c*U/P/y),i-e/n*y,T!=w,u)]
'update inventory set _=B,n=v where o-p*k-J>T'
'select s from inventory where w4g4<.m(_)/_>t'
> Tainted[(w4g4<.m(_)/_>t)]
###Markdown
One can see that `insert`, `update`, `select` and `delete` statements on an existing table lead to taint exceptions. We can now focus on these specific kinds of inputs. However, this is not the only thing we can do. We will see how we can identify specific portions of input that reached tainted execution using character origins in the later sections. But before that, we explore other uses of taints. Preventing Privacy LeaksUsing taints, we can also ensure that secret information does not leak out. We can assign a special taint `"SECRET"` to strings whose information must not leak out:
###Code
secrets = tstr('<Plenty of secret keys>', taint='SECRET')
###Output
_____no_output_____
###Markdown
Accessing any substring of `secrets` will propagate the taint:
###Code
secrets[1:3].taint # type: ignore
###Output
_____no_output_____
###Markdown
Consider the _heartbeat_ security leak from [the chapter on Fuzzing](Fuzzer.ipynb), in which a server would accidentally send back not only the user input sent to it, but also secret memory. If the reply consists only of the user input, there is no taint associated with it:
###Code
user_input = "hello"
reply = user_input
isinstance(reply, tstr)
###Output
_____no_output_____
###Markdown
If, however, the reply contains _any_ part of the secret, the reply will be tainted:
###Code
reply = user_input + secrets[0:5]
reply
reply.taint # type: ignore
###Output
_____no_output_____
###Markdown
The output function of our server would now ensure that the data sent back does not contain any secret information:
###Code
def send_back(s):
assert not isinstance(s, tstr) and not s.taint == 'SECRET' # type: ignore
...
with ExpectError():
send_back(reply)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14685/3747050841.py", line 2, in <module>
send_back(reply)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14685/3158733057.py", line 2, in send_back
assert not isinstance(s, tstr) and not s.taint == 'SECRET' # type: ignore
AssertionError (expected)
###Markdown
Our `tstr` solution can help to identify information leaks – but it is by no means complete. If we actually take the `heartbeat()` implementation from [the chapter on Fuzzing](Fuzzer.ipynb), we will see that _any_ reply is marked as `SECRET` – even replies that do not access secret memory at all:
###Code
from Fuzzer import heartbeat
reply = heartbeat('hello', 5, memory=secrets)
reply.taint # type: ignore
###Output
_____no_output_____
###Markdown
Why is this? If we look into the implementation of `heartbeat()`, we will see that it first builds a long string `memory` from the (non-secret) reply and the (secret) memory, before returning the first characters from `memory`:
```python
# Store reply in memory
memory = reply + memory[len(reply):]
```
At this point, the whole memory still is tainted as `SECRET`, _including_ the non-secret part from `reply`. We may be able to circumvent the issue by tagging the `reply` as `PUBLIC` – but then, this taint would be in conflict with the `SECRET` tag of `memory`. What happens if we compose a string from two differently tainted strings?
###Code
thilo = tstr("High", taint='HIGH') + tstr("Low", taint='LOW')
###Output
_____no_output_____
###Markdown
It turns out that in this case, the `__add__()` method takes precedence over the `__radd__()` method, which means that the right-hand `"Low"` string is treated as a regular (non-tainted) string.
###Code
thilo
thilo.taint # type: ignore
###Output
_____no_output_____
###Markdown
We could set up the `__add__()` and other methods with special handling for conflicting taints. However, the way this conflict should be resolved would be highly _application-dependent_:* If we use taints to indicate _privacy levels_, `SECRET` privacy should take precedence over `PUBLIC` privacy. Any combination of a `SECRET`-tainted string and a `PUBLIC`-tainted string thus should have a `SECRET` taint.* If we use taints to indicate _origins_ of information, an `UNTRUSTED` origin should take precedence over a `TRUSTED` origin. Any combination of an `UNTRUSTED`-tainted string and a `TRUSTED`-tainted string thus should have an `UNTRUSTED` taint.Of course, such conflict resolutions can be implemented. But even so, they will not help us in the `heartbeat()` example to differentiate secret from non-secret output data. Tracking Individual CharactersFortunately, there is a better, more generic way to solve the above problems. The key to composing differently tainted strings is to assign taints not only to strings, but actually to every bit of information – in our case, characters. If every character has a taint on its own, a new composition of characters will simply inherit this very taint _per character_. To this end, we introduce a second bit of information named _origin_. Distinguishing various untrusted sources may be accomplished by giving each instance a separate origin (called *colors* in dynamic origin research). You will see an instance of this technique in the chapter on [Grammar Mining](GrammarMiner.ipynb). In this section, we carry *character-level* origins. That is, given a fragment that resulted from a portion of the original origined string, one will be able to tell which portion of the input string the fragment was taken from. In essence, each input character index from an origined source gets its own color. More complex origin schemes such as *bitmap origins* are possible, where a single character may result from multiple origined character indexes (such as *checksum* operations on strings). We do not consider these in this chapter. A Class for Tracking Character OriginsLet us introduce a class `ostr` which, like `tstr`, carries a taint for each string, and additionally an _origin_ for each character that indicates its source. It is a consecutive number in a particular range (by default, starting with zero) indicating its _position_ within a specific origin.
###Code
class ostr(str):
"""Wrapper for strings, saving taint and origin information"""
DEFAULT_ORIGIN = 0
def __new__(cls, value, *args, **kw):
"""Create an ostr() instance. Used internally."""
return str.__new__(cls, value)
def __init__(self, value: Any, taint: Any = None,
origin: Optional[Union[int, List[int]]] = None, **kwargs) -> None:
"""Constructor.
`value` is the string value the `ostr` object is to be constructed from.
`taint` is an (optional) taint to be propagated to derived strings.
`origin` (optional) is either
- an integer denoting the index of the first character in `value`, or
- a list of integers denoting the origins of the characters in `value`,
"""
self.taint = taint
if origin is None:
origin = ostr.DEFAULT_ORIGIN
if isinstance(origin, int):
self.origin = list(range(origin, origin + len(self)))
else:
self.origin = origin
assert len(self.origin) == len(self)
###Output
_____no_output_____
###Markdown
As with `tstr`, above, we implement methods for conversion into (regular) Python strings:
###Code
class ostr(ostr):
def create(self, s):
return ostr(s, taint=self.taint, origin=self.origin)
class ostr(ostr):
UNKNOWN_ORIGIN = -1
def __repr__(self):
# handle escaped chars
origin = [ostr.UNKNOWN_ORIGIN]
for s, o in zip(str(self), self.origin):
origin.extend([o] * (len(repr(s)) - 2))
origin.append(ostr.UNKNOWN_ORIGIN)
return ostr(str.__repr__(self), taint=self.taint, origin=origin)
class ostr(ostr):
def __str__(self):
return str.__str__(self)
###Output
_____no_output_____
###Markdown
By default, character origins start with `0`:
###Code
othello = ostr('hello')
assert othello.origin == [0, 1, 2, 3, 4]
###Output
_____no_output_____
###Markdown
We can also specify the starting origin as below -- `6..10`
###Code
tworld = ostr('world', origin=6)
assert tworld.origin == [6, 7, 8, 9, 10]
a = ostr("hello\tworld")
repr(a).origin # type: ignore
###Output
_____no_output_____
###Markdown
`str()` returns a `str` instance without origin or taint information:
###Code
assert type(str(othello)) == str
###Output
_____no_output_____
###Markdown
`repr()`, however, keeps the origin information for the original string:
###Code
repr(othello)
repr(othello).origin # type: ignore
###Output
_____no_output_____
###Markdown
Just as with taints, we can clear origins and check whether an origin is present:
###Code
class ostr(ostr):
def clear_taint(self):
self.taint = None
return self
def has_taint(self):
return self.taint is not None
class ostr(ostr):
def clear_origin(self):
self.origin = [self.UNKNOWN_ORIGIN] * len(self)
return self
def has_origin(self):
return any(origin != self.UNKNOWN_ORIGIN for origin in self.origin)
othello = ostr('Hello')
assert othello.has_origin()
othello.clear_origin()
assert not othello.has_origin()
###Output
_____no_output_____
###Markdown
In the remainder of this section, we re-implement various string methods such that they also keep track of origins. If this is too tedious for you, jump right [to the next section](#Checking-Origins) which gives a number of usage examples. Excursion: Implementing String Methods CreateWe need to create new substrings that are wrapped in `ostr` objects. However, we also want to allow our subclasses to create their own instances. Hence we again provide a `create()` method that produces a new `ostr` instance.
###Code
class ostr(ostr):
def create(self, res, origin=None):
return ostr(res, taint=self.taint, origin=origin)
othello = ostr('hello', taint='HIGH')
otworld = othello.create('world', origin=6)
otworld.origin
otworld.taint
assert (othello.origin, otworld.origin) == (
[0, 1, 2, 3, 4], [6, 7, 8, 9, 10])
###Output
_____no_output_____
###Markdown
IndexIn Python, indexing is provided through `__getitem__()`. Indexing on positive integers is simple enough. However, it has two additional wrinkles. The first is that, if the index is negative, positions are counted backwards from the end of the string (which lies just after the last character). That is, the last character has the negative index `-1`:
###Code
class ostr(ostr):
def __getitem__(self, key):
res = super().__getitem__(key)
if isinstance(key, int):
key = len(self) + key if key < 0 else key
return self.create(res, [self.origin[key]])
elif isinstance(key, slice):
return self.create(res, self.origin[key])
else:
assert False
ohello = ostr('hello', taint='HIGH')
assert (ohello[0], ohello[-1]) == ('h', 'o')
ohello[0].taint
###Output
_____no_output_____
###Markdown
The other wrinkle is that `__getitem__()` can accept a slice. We discuss this next. SlicesThe Python `slice` operator `[n:m]` relies on the object being an `iterator`. Hence, we define the `__iter__()` method, which returns a custom `iterator`.
###Code
class ostr(ostr):
def __iter__(self):
return ostr_iterator(self)
###Output
_____no_output_____
###Markdown
The `__iter__()` method requires a supporting `iterator` object. The `iterator` is used to save the state of the current iteration, which it does by keeping a reference to the original `ostr`, and the current index of iteration `_str_idx`.
###Code
class ostr_iterator():
def __init__(self, ostr):
self._ostr = ostr
self._str_idx = 0
def __next__(self):
if self._str_idx == len(self._ostr):
raise StopIteration
# calls ostr getitem should be ostr
c = self._ostr[self._str_idx]
assert isinstance(c, ostr)
self._str_idx += 1
return c
###Output
_____no_output_____
###Markdown
Bringing all these together:
###Code
thw = ostr('hello world', taint='HIGH')
thw[0:5]
assert thw[0:5].has_taint()
assert thw[0:5].has_origin()
thw[0:5].taint
thw[0:5].origin
###Output
_____no_output_____
###Markdown
Splits
###Code
def make_split_wrapper(fun):
def proxy(self, *args, **kwargs):
lst = fun(self, *args, **kwargs)
return [self.create(elem) for elem in lst]
return proxy
for name in ['split', 'rsplit', 'splitlines']:
fun = getattr(str, name)
setattr(ostr, name, make_split_wrapper(fun))
othello = ostr('hello world', taint='LOW')
othello == 'hello world'
othello.split()[0].taint # type: ignore
###Output
_____no_output_____
###Markdown
(Exercise for the reader: handle _partitions_, i.e., splitting a string by substrings) ConcatenationIf two origined strings are concatenated together, it may be desirable to transfer the origins from each to the corresponding portion of the resulting string. The concatenation of strings is accomplished by overriding `__add__()`.
###Code
class ostr(ostr):
def __add__(self, other):
if isinstance(other, ostr):
return self.create(str.__add__(self, other),
(self.origin + other.origin))
else:
return self.create(str.__add__(self, other),
(self.origin + [self.UNKNOWN_ORIGIN for i in other]))
###Output
_____no_output_____
###Markdown
Testing concatenations between two `ostr` instances:
###Code
othello = ostr("hello")
otworld = ostr("world", origin=6)
othw = othello + otworld
assert othw.origin == [0, 1, 2, 3, 4, 6, 7, 8, 9, 10] # type: ignore
###Output
_____no_output_____
###Markdown
What if a `ostr` is concatenated with a `str`?
###Code
space = " "
th_w = othello + space + otworld
assert th_w.origin == [
0,
1,
2,
3,
4,
ostr.UNKNOWN_ORIGIN,
ostr.UNKNOWN_ORIGIN,
6,
7,
8,
9,
10]
###Output
_____no_output_____
###Markdown
One wrinkle here is that when adding an `ostr` and a `str`, the user may place the `str` first, in which case the `__add__()` method will be called on the `str` instance, not on the `ostr` instance. However, Python provides a solution: if one defines `__radd__()` on the `ostr` instance, that method will be called rather than `str.__add__()`.
###Code
class ostr(ostr):
def __radd__(self, other):
origin = other.origin if isinstance(other, ostr) else [
self.UNKNOWN_ORIGIN for i in other]
return self.create(str.__add__(other, self), (origin + self.origin))
###Output
_____no_output_____
###Markdown
We test it out:
###Code
shello = "hello"
otworld = ostr("world")
thw = shello + otworld
assert thw.origin == [ostr.UNKNOWN_ORIGIN] * len(shello) + [0, 1, 2, 3, 4] # type: ignore
###Output
_____no_output_____
###Markdown
These two methods – `slicing` and `concatenation` – are sufficient to implement other string methods that result in a string and do not change the characters themselves (i.e., no case change). Hence, we look at a helper method next. Extract Origin StringGiven a specific input index, the method `x()` extracts the corresponding origined portion from an `ostr`. As a convenience, it supports `slice`s along with `int`s.
###Code
class ostr(ostr):
class TaintException(Exception):
pass
def x(self, i=0):
"""Extract substring at index/slice `i`"""
if not self.origin:
raise origin.TaintException('Invalid request idx')
if isinstance(i, int):
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j == i]]
elif isinstance(i, slice):
r = range(i.start or 0, i.stop or len(self), i.step or 1)
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j in r]]
thw = ostr('hello world', origin=100)
assert thw.x(101) == ['e']
assert thw.x(slice(101, 105)) == ['e', 'l', 'l', 'o']
###Output
_____no_output_____
###Markdown
Replace The `replace()` method replaces a portion of the string with another.
###Code
class ostr(ostr):
def replace(self, a, b, n=None):
old_origin = self.origin
b_origin = b.origin if isinstance(
b, ostr) else [self.UNKNOWN_ORIGIN] * len(b)
mystr = str(self)
i = 0
while True:
if n and i >= n:
break
idx = mystr.find(a)
if idx == -1:
break
last = idx + len(a)
mystr = mystr.replace(a, b, 1)
partA, partB = old_origin[0:idx], old_origin[last:]
old_origin = partA + b_origin + partB
i += 1
return self.create(mystr, old_origin)
my_str = ostr("aa cde aa")
res = my_str.replace('aa', 'bb')
assert res, res.origin == ('bb', 'cde', 'bb',
[ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN,
2, 3, 4, 5, 6,
ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN])
my_str = ostr("aa cde aa")
res = my_str.replace('aa', ostr('bb', origin=100))
assert (
res, res.origin) == (
('bb cde bb'), [
100, 101, 2, 3, 4, 5, 6, 100, 101])
###Output
_____no_output_____
###Markdown
Split We essentially have to re-implement split operations, and split by space is slightly different from other splits.
###Code
class ostr(ostr):
def _split_helper(self, sep, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = len(sep)
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
first_idx = last_idx + sep_len
return result_list
def _split_space(self, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = 0
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
v = str(self[last_idx:])
sep_len = len(v) - len(v.lstrip(' '))
first_idx = last_idx + sep_len
return result_list
def rsplit(self, sep=None, maxsplit=-1):
splitted = super().rsplit(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
def split(self, sep=None, maxsplit=-1):
splitted = super().split(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
my_str = ostr('ab cdef ghij kl')
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([0, 1], [3, 4, 5, 6], [8, 9, 10, 11], [13, 14])
my_str = ostr('ab cdef ghij kl', origin=list(range(0, 15)))
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert (ab.origin, cdef.origin, kl.origin) == ([0, 1], [3, 4, 5, 6], [13, 14])
my_str = ostr('ab cdef ghij kl', origin=100, taint='HIGH')
ab, cdef, ghij, kl = my_str.rsplit()
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([100, 101], [105, 106, 107, 108], [110, 111, 112, 113],
[118, 119])
my_str = ostr('ab cdef ghij kl', origin=list(range(0, 20)), taint='HIGH')
ab, cdef, ghij, kl = my_str.split()
assert (ab.origin, cdef.origin, kl.origin) == ([0, 1], [5, 6, 7, 8], [18, 19])
assert ab.taint == 'HIGH'
###Output
_____no_output_____
###Markdown
Strip
###Code
class ostr(ostr):
def strip(self, cl=None):
return self.lstrip(cl).rstrip(cl)
def lstrip(self, cl=None):
res = super().lstrip(cl)
i = self.find(res)
return self[i:]
def rstrip(self, cl=None):
res = super().rstrip(cl)
return self[0:len(res)]
my_str1 = ostr(" abc ")
v = my_str1.strip()
assert (v, v.origin) == ('abc', [2, 3, 4])
my_str1 = ostr(" abc ")
v = my_str1.lstrip()
assert (v, v.origin) == ('abc ', [2, 3, 4, 5, 6])
my_str1 = ostr(" abc ")
v = my_str1.rstrip()
assert (v, v.origin) == (' abc', [0, 1, 2, 3, 4])
###Output
_____no_output_____
###Markdown
Expand Tabs
###Code
class ostr(ostr):
    def expandtabs(self, n=8):
        parts = self.split('\t')
        res = super().expandtabs(n)
        all_parts = []
        for i, p in enumerate(parts):
            all_parts.extend(p.origin)
            if i < len(parts) - 1:
                # A tab expands to the next multiple of `n` columns; fill the
                # gap with the origin of the last character seen (if any).
                l = n - len(all_parts) % n
                filler = p.origin[-1] if p.origin else self.UNKNOWN_ORIGIN
                all_parts.extend([filler] * l)
        return self.create(res, all_parts)
my_s = str("ab\tcd")
my_ostr = ostr("ab\tcd")
v1 = my_s.expandtabs(4)
v2 = my_ostr.expandtabs(4)
assert str(v1) == str(v2)
assert (len(v1), repr(v2), v2.origin) == (6, "'ab cd'", [0, 1, 1, 1, 3, 4])
class ostr(ostr):
def join(self, iterable):
mystr = ''
myorigin = []
sep_origin = self.origin
lst = list(iterable)
for i, s in enumerate(lst):
sorigin = s.origin if isinstance(s, ostr) else [
self.UNKNOWN_ORIGIN] * len(s)
myorigin.extend(sorigin)
mystr += str(s)
if i < len(lst) - 1:
myorigin.extend(sep_origin)
mystr += str(self)
res = super().join(iterable)
assert len(res) == len(mystr)
return self.create(res, myorigin)
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104]) # type: ignore
v4 = ostr('').join([v2, v3, v1])
assert (
v4, v4.origin) == (
'cdefab', [
103, 104, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 100, 101])
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104]) # type: ignore
v4 = ostr(',').join([v2, v3, v1])
assert (v4, v4.origin) == ('cd,ef,ab',
[103, 104, 0, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 0, 100, 101]) # type: ignore
###Output
_____no_output_____
###Markdown
Partitions
###Code
class ostr(ostr):
def partition(self, sep):
partA, sep, partB = super().partition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
def rpartition(self, sep):
partA, sep, partB = super().rpartition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
###Output
_____no_output_____
###Markdown
Justify
###Code
class ostr(ostr):
    def ljust(self, width, fillchar=' '):
        res = super().ljust(width, fillchar)
        fill = len(res) - len(self)
        # `ljust()` pads on the right; use the fill character's own origin
        # if it is an `ostr`, and UNKNOWN_ORIGIN otherwise.
        if isinstance(fillchar, ostr):
            t = fillchar.origin[0]
        else:
            t = self.UNKNOWN_ORIGIN
        return self.create(res, self.origin + [t] * fill)
class ostr(ostr):
    def rjust(self, width, fillchar=' '):
        res = super().rjust(width, fillchar)
        fill = len(res) - len(self)
        # `rjust()` pads on the left.
        if isinstance(fillchar, ostr):
            t = fillchar.origin[0]
        else:
            t = self.UNKNOWN_ORIGIN
        return self.create(res, [t] * fill + self.origin)
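# Example (our own, not from the original text): padding characters receive
# UNKNOWN_ORIGIN unless the fill character itself is an `ostr` with an origin.
v = ostr('ab', origin=100).rjust(4)
v, v.origin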
###Output
_____no_output_____
###Markdown
mod
###Code
class ostr(ostr):
def __mod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
s_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = self.find('%s')
assert i >= 0
res = super().__mod__(s)
r_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
class ostr(ostr):
def __rmod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
r_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = s.find('%s')
assert i >= 0
res = super().__rmod__(s)
s_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
a = ostr('hello %s world', origin=100)
a
(a % 'good').origin
b = 'hello %s world'
c = ostr('bad', origin=10)
(b % c).origin
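# Note (our own illustration): the sketches above only handle a plain string
# argument and a single '%s' placeholder; anything else is rejected.
from ExpectError import ExpectError  # also used later in this chapter
with ExpectError():
    (ostr('hello %d world', origin=100) % 42).origin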
###Output
_____no_output_____
###Markdown
String methods that do not change origin
###Code
class ostr(ostr):
def swapcase(self):
return self.create(str(self).swapcase(), self.origin)
def upper(self):
return self.create(str(self).upper(), self.origin)
def lower(self):
return self.create(str(self).lower(), self.origin)
def capitalize(self):
return self.create(str(self).capitalize(), self.origin)
def title(self):
return self.create(str(self).title(), self.origin)
a = ostr('aa', origin=100).upper()
a, a.origin
###Output
_____no_output_____
###Markdown
General wrappers These are not strictly needed for operation, but can be useful for tracing.
###Code
def make_basic_str_wrapper(fun): # type: ignore
def proxy(*args, **kwargs):
res = fun(*args, **kwargs)
return res
return proxy
import inspect
import types
def informationflow_init_2():
ostr_members = [name for name, fn in inspect.getmembers(ostr, callable)
if isinstance(fn, types.FunctionType) and fn.__qualname__.startswith('ostr')]
for name, fn in inspect.getmembers(str, callable):
if name not in set(['__class__', '__new__', '__str__', '__init__',
'__repr__', '__getattribute__']) | set(ostr_members):
setattr(ostr, name, make_basic_str_wrapper(fn))
informationflow_init_2()
INITIALIZER_LIST.append(informationflow_init_2)
###Output
_____no_output_____
###Markdown
Methods yet to be translated These methods generate strings from other strings. However, we do not have the right implementations for any of these. Hence these are marked as dangerous until we can generate the right translations.
###Code
def make_str_abort_wrapper(fun):
def proxy(*args, **kwargs):
raise ostr.TaintException(
'%s Not implemented in `ostr`' %
fun.__name__)
return proxy
def informationflow_init_3():
for name, fn in inspect.getmembers(str, callable):
# Omitted 'splitlines' as this is needed for formatting output in
# IPython/Jupyter
if name in ['__format__', 'format_map', 'format',
'__mul__', '__rmul__', 'center', 'zfill', 'decode', 'encode']:
setattr(ostr, name, make_str_abort_wrapper(fn))
informationflow_init_3()
INITIALIZER_LIST.append(informationflow_init_3)
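# Example (our own): the blocked methods now raise an ostr.TaintException
# instead of silently dropping origin information.
from ExpectError import ExpectError  # also used later in this chapter
with ExpectError():
    ostr('hello').center(10)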
###Output
_____no_output_____
###Markdown
While generating proxy wrappers for string operations can handle most common cases of information flow propagation, some of the operations involving strings cannot be overridden this way. End of Excursion Checking Origins With all this implemented, we now have full-fledged `ostr` strings where we can easily check the origin of each and every character. To check whether a string originates from another string, we can convert the origins to sets and resort to standard set operations:
###Code
s = ostr("hello", origin=100)
s[1]
s[1].origin
set(s[1].origin) <= set(s.origin)
t = ostr("world", origin=200)
set(s.origin) <= set(t.origin)
u = s + t + "!"
u.origin
ostr.UNKNOWN_ORIGIN in u.origin
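# For convenience, one could wrap this subset check in a small helper
# (a sketch; `originates_from()` is our own name, not part of the ostr API):
def originates_from(derived, source):
    """Return True if every character of `derived` stems from `source`."""
    return set(derived.origin) <= set(source.origin)

assert originates_from(s[1:3], s)
assert not originates_from(u, t)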
###Output
_____no_output_____
###Markdown
Privacy Leaks Revisited Let us apply origin tracking to see whether we can come up with a satisfactory solution for checking the `heartbeat()` function against information leakage.
###Code
SECRET_ORIGIN = 1000
###Output
_____no_output_____
###Markdown
We define a "secret" that must not leak out:
###Code
secret = ostr('<again, some super-secret input>', origin=SECRET_ORIGIN)
###Output
_____no_output_____
###Markdown
Each and every character in `secret` has an origin starting at `SECRET_ORIGIN`:
###Code
print(secret.origin)
###Output
[1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031]
###Markdown
If we now invoke `heartbeat()` with a given string, the origins of the reply should all be `UNKNOWN_ORIGIN` (reflecting the new input), and none of the characters should have an origin at or above `SECRET_ORIGIN`.
###Code
hello_s = heartbeat('hello', 5, memory=secret)
hello_s
assert isinstance(hello_s, ostr)
print(hello_s.origin)
###Output
[-1, -1, -1, -1, -1]
###Markdown
We can verify that the secret did not leak out by formulating appropriate assertions:
###Code
assert hello_s.origin == [ostr.UNKNOWN_ORIGIN] * len(hello_s)
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in hello_s.origin)
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
###Output
_____no_output_____
###Markdown
All assertions pass, again confirming that no secret leaked out. Let us now go and exploit `heartbeat()` to reveal its secrets. As `heartbeat()` is unchanged, it is as vulnerable as it was:
###Code
hello_s = heartbeat('hello', 32, memory=secret)
hello_s
###Output
_____no_output_____
###Markdown
Now, however, the reply _does_ contain secret information:
###Code
assert isinstance(hello_s, ostr)
print(hello_s.origin)
with ExpectError():
assert hello_s.origin == [ostr.UNKNOWN_ORIGIN] * len(hello_s)
with ExpectError():
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in hello_s.origin)
with ExpectError():
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
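# One way to package such a check (a sketch; `assert_no_secret_leak()` is our
# own helper, not part of the chapter's code):
def assert_no_secret_leak(reply):
    if isinstance(reply, ostr):
        assert not any(origin >= SECRET_ORIGIN for origin in reply.origin), \
            "Secret data leaked into the reply!"

with ExpectError():
    assert_no_secret_leak(hello_s)  # fails, as the reply contains secret bytes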
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14685/1577803914.py", line 2, in <module>
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
AssertionError (expected)
###Markdown
We can now integrate these assertions into the `heartbeat()` function, causing it to fail before leaking information. Additionally (or alternatively), we can also rewrite our output functions not to give out any secret information. We leave these two exercises for the reader. Taint-Directed Fuzzing The previous _taint-aware fuzzing_ was a bit unsatisfactory in that we could not focus on the specific parts of the grammar that led to dangerous operations. We fix that with _taint-directed fuzzing_ using `TrackingDB`. The idea here is to track the origins of each character that reaches `eval()`, trace them back to the grammar nodes that generated them, and increase the probability of using those nodes again. TrackingDB The `TrackingDB` is similar to `TaintedDB`. The difference is that if the execution reaches `my_eval()` with a tainted statement, we simply raise a `Tainted` exception.
###Code
class TrackingDB(TaintedDB):
def my_eval(self, statement, g, l):
if statement.origin:
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
Next, we need a specially crafted fuzzer that preserves the taints. TaintedGrammarFuzzerWe define a `TaintedGrammarFuzzer` class that ensures that the taints propagate to the derivation tree. This is similar to the `GrammarFuzzer` from the [chapter on grammar fuzzers](GrammarFuzzer.ipynb) except that the origins and taints are preserved.
###Code
import random
from GrammarFuzzer import GrammarFuzzer
from Parser import canonical
class TaintedGrammarFuzzer(GrammarFuzzer):
def __init__(self,
grammar,
start_symbol=START_SYMBOL,
expansion_switch=1,
log=False):
self.tainted_start_symbol = ostr(
start_symbol, origin=[1] * len(start_symbol))
self.expansion_switch = expansion_switch
self.log = log
self.grammar = grammar
self.c_grammar = canonical(grammar)
self.init_tainted_grammar()
def expansion_cost(self, expansion, seen=set()):
symbols = [e for e in expansion if e in self.c_grammar]
if len(symbols) == 0:
return 1
if any(s in seen for s in symbols):
return float('inf')
return sum(self.symbol_cost(s, seen) for s in symbols) + 1
def fuzz_tree(self):
tree = (self.tainted_start_symbol, [])
nt_leaves = [tree]
expansion_trials = 0
while nt_leaves:
idx = random.randint(0, len(nt_leaves) - 1)
key, children = nt_leaves[idx]
expansions = self.ct_grammar[key]
if expansion_trials < self.expansion_switch:
expansion = random.choice(expansions)
else:
costs = [self.expansion_cost(e) for e in expansions]
m = min(costs)
all_min = [i for i, c in enumerate(costs) if c == m]
expansion = expansions[random.choice(all_min)]
new_leaves = [(token, []) for token in expansion]
new_nt_leaves = [e for e in new_leaves if e[0] in self.ct_grammar]
children[:] = new_leaves
nt_leaves[idx:idx + 1] = new_nt_leaves
if self.log:
print("%-40s" % (key + " -> " + str(expansion)))
expansion_trials += 1
return tree
def fuzz(self):
self.derivation_tree = self.fuzz_tree()
return self.tree_to_string(self.derivation_tree)
###Output
_____no_output_____
###Markdown
We use a specially prepared tainted grammar for fuzzing. We mark each individual definition, each individual rule, and each individual token with a separate origin (we chose a token increment of 10 here, after inspecting the grammar). This allows us to track exactly which parts of the grammar were involved in the operations we are interested in.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def init_tainted_grammar(self):
key_increment, alt_increment, token_increment = 1000, 100, 10
key_origin = key_increment
self.ct_grammar = {}
for key, val in self.c_grammar.items():
key_origin += key_increment
os = []
for v in val:
ts = []
key_origin += alt_increment
for t in v:
nt = ostr(t, origin=key_origin)
key_origin += token_increment
ts.append(nt)
os.append(ts)
self.ct_grammar[key] = os
# a use tracking grammar
self.ctp_grammar = {}
for key, val in self.ct_grammar.items():
self.ctp_grammar[key] = [(v, dict(use=0)) for v in val]
###Output
_____no_output_____
###Markdown
As before, we initialize the `TrackingDB`
###Code
trdb = TrackingDB(db.db)
###Output
_____no_output_____
###Markdown
Finally, we need to ensure that the taints are preserved when the tree is converted back to a string. For this, we override `tree_to_string()`:
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def tree_to_string(self, tree):
symbol, children, *_ = tree
e = ostr('')
if children:
return e.join([self.tree_to_string(c) for c in children])
else:
return e if symbol in self.c_grammar else symbol
###Output
_____no_output_____
###Markdown
We define `update_grammar()`, which accepts the set of origins that reached the dangerous operations as well as the derivation tree of the string used for fuzzing, and uses them to update the enhanced grammar.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def update_grammar(self, origin, dtree):
def update_tree(dtree, origin):
key, children = dtree
if children:
updated_children = [update_tree(c, origin) for c in children]
corigin = set.union(
*[o for (key, children, o) in updated_children])
corigin = corigin.union(set(key.origin))
return (key, children, corigin)
else:
my_origin = set(key.origin).intersection(origin)
return (key, [], my_origin)
key, children, oset = update_tree(dtree, set(origin))
for key, alts in self.ctp_grammar.items():
for alt, o in alts:
alt_origins = set([i for token in alt for i in token.origin])
if alt_origins.intersection(oset):
o['use'] += 1
###Output
_____no_output_____
###Markdown
With these, we are now ready to fuzz.
###Code
def tree_type(tree):
key, children = tree
return (type(key), key, [tree_type(c) for c in children])
tgf = TaintedGrammarFuzzer(INVENTORY_GRAMMAR_F)
x = None
for _ in range(10):
qtree = tgf.fuzz_tree()
query = tgf.tree_to_string(qtree)
assert isinstance(query, ostr)
try:
print(repr(query))
res = trdb.sql(query)
print(repr(res))
except SQLException as e:
print(e)
except Tainted as e:
print(e)
origin = e.args[0].origin
tgf.update_grammar(origin, qtree)
except:
traceback.print_exc()
break
print()
###Output
'select (g!=(9)!=((:)==2==9)!=J)==-7 from inventory'
Tainted[((g!=(9)!=((:)==2==9)!=J)==-7)]
'delete from inventory where ((c)==T)!=5==(8!=Y)!=-5'
Tainted[((c)==T)!=5==(8!=Y)!=-5]
'select (((w==(((X!=------8)))))) from inventory'
Tainted[((((w==(((X!=------8)))))))]
'delete from inventory where ((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))'
Tainted[((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))]
'delete from inventory where ((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))'
Tainted[((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))]
'delete from inventory where o!=2==---5==3!=t'
Tainted[o!=2==---5==3!=t]
'select (2) from inventory'
Tainted[((2))]
'select _ from inventory'
Tainted[(_)]
'select L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)) from inventory'
Tainted[(L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)))]
'delete from inventory where _==(7==(9)!=(---5)==1)==-8'
Tainted[_==(7==(9)!=(---5)==1)==-8]
###Markdown
We can now inspect our enhanced grammar to see how many times each rule was used.
###Code
tgf.ctp_grammar
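# Sketch (our own code, not part of the chapter): the `use` counters could be
# turned into relative weights per rule, e.g. to build a probabilistic grammar
# that favors expansions which reached the dangerous `eval()` call.
def rule_weights(ctp_grammar):
    weights = {}
    for key, alts in ctp_grammar.items():
        total = sum(o['use'] for _, o in alts)
        weights[key] = [o['use'] / total if total else None for _, o in alts]
    return weights

rule_weights(tgf.ctp_grammar)['<query>']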
###Output
_____no_output_____
###Markdown
From here, the idea is to focus on the rules that reached dangerous operations more often, and to increase the probability of choosing those rules. The Limits of Taint Tracking While our framework can detect information leakage, it is by no means perfect. There are several ways in which taints can get lost, and information may thus still leak out. Conversions We only track taints and origins through _strings_ and _characters_. If we convert these to numbers (or other data), the information is lost. As an example, consider this function, converting individual characters to numbers and back:
###Code
def strip_all_info(s):
t = ""
for c in s:
t += chr(ord(c))
return t
othello = ostr("Secret")
othello
othello.origin # type: ignore
###Output
_____no_output_____
###Markdown
The taints and origins will not propagate through the number conversion:
###Code
thello_stripped = strip_all_info(othello)
thello_stripped
with ExpectError():
thello_stripped.origin
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14685/588526133.py", line 2, in <module>
thello_stripped.origin
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
This issue could be addressed by extending numbers with taints and origins, just as we did for strings. At some point, however, this will still break down, because as soon as an internal C function in the Python library is reached, the taint will not propagate into and across the C function. (Unless one starts implementing dynamic taints for these, that is.) Internal C libraries As we mentioned before, calls to _internal_ C libraries do not propagate taints. For example, while the following preserves the taints,
###Code
hello = ostr('hello', origin=100)
world = ostr('world', origin=200)
(hello + ' ' + world).origin
###Output
_____no_output_____
###Markdown
an equivalent call to `join()` will fail:
###Code
with ExpectError():
''.join([hello, ' ', world]).origin # type: ignore
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_14685/2341342688.py", line 2, in <module>
''.join([hello, ' ', world]).origin # type: ignore
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
Implicit Information FlowEven if one could taint all data in a program, there still would be means to break information flow – notably by turning explicit flow into _implicit_ flow, or data flow into _control flow_. Here is an example:
###Code
def strip_all_info_again(s):
t = ""
for c in s:
if c == 'a':
t += 'a'
elif c == 'b':
t += 'b'
elif c == 'c':
t += 'c'
...
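# A tiny complete variant (our own sketch) makes the effect visible:
# the copy is built purely via control flow, so no taint or origin survives.
def copy_ab(s):
    t = ""
    for c in s:
        t += 'a' if c == 'a' else 'b'
    return t

assert not isinstance(copy_ab(ostr("abba", origin=100)), ostr)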
###Output
_____no_output_____
###Markdown
With such a function, there is no explicit data flow between the characters in `s` and the characters in `t`; yet, the strings would be identical. This problem frequently occurs in programs that process and manipulate external input. Enforcing Tainting Conversions and implicit information flow are but two of several ways in which taint and origin information can get lost. To address the problem, the best solution is to _always assume the worst from untainted strings_:* When it comes to trust, an untainted string should be treated as _possibly untrusted_, and hence not relied upon unless sanitized.* When it comes to privacy, an untainted string should be treated as _possibly secret_, and hence not leaked out.As a consequence, your program should always have two kinds of taints: one for explicitly trusted (or secret) strings and one for explicitly untrusted (or non-secret) strings. If a taint gets lost along the way, you may have to restore it from its sources – not unlike the string methods discussed above. The benefit is a trusted application, in which each and every information flow can be checked at runtime, with violations quickly discovered through automated tests. Synopsis This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on the originating indexes of the input string. Tracking String Taints `tstr` objects are replacements for Python strings that allow one to track and check _taints_ – that is, information on where a string originated. For instance, one can mark strings that originate from third-party input with a taint of "LOW", meaning that they have a low security level. The taint is passed in the constructor of a `tstr` object:
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
A `tstr` object is fully compatible with original Python strings. For instance, we can index it and access substrings:
###Code
thello[:4]
###Output
_____no_output_____
###Markdown
However, the `tstr` object also stores the taint, which can be accessed using the `taint` attribute:
###Code
thello.taint
###Output
_____no_output_____
###Markdown
The neat thing about taints is that they propagate to all strings derived from the original tainted string. Indeed, any operation on a `tstr` string that results in a string fragment produces another `tstr` object that includes the original taint. For example:
###Code
thello[1:2].taint # type: ignore
###Output
_____no_output_____
###Markdown
`tstr` objects duplicate most `str` methods, as indicated in the class diagram:
###Code
# ignore
from ClassDiagram import display_class_hierarchy
display_class_hierarchy(tstr)
###Output
_____no_output_____
###Markdown
Tracking Character Origins `ostr` objects extend `tstr` objects by tracking not only a taint, but also the originating _indexes_ from the input string. This allows you to track exactly where individual characters came from. Assume you have a long string, which at index 100 contains the password `"joshua1234"`. Then you can save this origin information using an `ostr` as follows:
###Code
secret = ostr("joshua1234", origin=100, taint='SECRET')
###Output
_____no_output_____
###Markdown
The `origin` attribute of an `ostr` provides access to a list of indexes:
###Code
secret.origin
secret.taint
###Output
_____no_output_____
###Markdown
`ostr` objects are compatible with Python strings, except that string operations return `ostr` objects (together with the saved origin and index information). An index of `-1` indicates that the corresponding character has no origin as supplied to the `ostr()` constructor:
###Code
secret_substr = (secret[0:4] + "-" + secret[6:])
secret_substr.taint
secret_substr.origin
###Output
_____no_output_____
###Markdown
`ostr` objects duplicate most `str` methods, as indicated in the class diagram:
###Code
# ignore
display_class_hierarchy(ostr)
###Output
_____no_output_____
###Markdown
Lessons Learned* String-based and character-based taints allow one to dynamically track the information flow from input to the internals of a system and back to the output.* Checking taints allows one to discover untrusted inputs and information leakage at runtime.* Data conversions and implicit data flow may strip taint information; the resulting untainted strings should be treated as having the worst possible taint.* Taints can be used in conjunction with fuzzing to provide a more robust indication of incorrect behavior than simply relying on program crashes. Next Steps An even better alternative to our taint-directed fuzzing is to make use of _symbolic_ techniques that take the semantics of the program under test into account. The chapter on [flow fuzzing](FlowFuzzer.ipynb) introduces these symbolic techniques for the purpose of exploring information flows; the subsequent chapter on [symbolic fuzzing](SymbolicFuzzer.ipynb) then shows how to make full-fledged use of symbolic execution for covering code. Similarly, [search-based fuzzing](SearchBasedFuzzer.ipynb) can often provide a cheaper exploration strategy. Background Taint analysis on Python using a library approach, as we implemented in this chapter, was discussed by Conti et al. \cite{Conti2010}. Exercises Exercise 1: Tainted Numbers Introduce a class `tint` (for tainted integer) that, like `tstr`, has a taint attribute that gets passed on from `tint` to `tint`. Part 1: Creation Implement the `tint` class such that taints are set:```pythonx = tint(42, taint='SECRET')assert x.taint == 'SECRET'``` **Solution.** This is pretty straightforward, as we can apply the same scheme as for `tstr`:
###Code
class tint(int):
def __new__(cls, value, *args, **kw):
return int.__new__(cls, value)
def __init__(self, value, taint=None, **kwargs):
self.taint = taint
x = tint(42, taint='SECRET')
assert x.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 2: Arithmetic expressionsEnsure that taints get passed along arithmetic expressions; support addition, subtraction, multiplication, and division operators.```pythony = x + 1assert y.taint == 'SECRET'``` **Solution.** As with `tstr`, we implement a `create()` method and a convenience function to quickly define all arithmetic operations:
###Code
class tint(tint):
def create(self, n):
# print("New tint from", n)
return tint(n, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_int_wrapper()` function creates a wrapper around an existing `int` method which attaches the taint to the result of the method:
###Code
def make_int_wrapper(fun):
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
# print(fun, args, kwargs, "=", repr(res))
return self.create(res)
return proxy
###Output
_____no_output_____
###Markdown
We do this for all arithmetic operators:
###Code
for name in ['__add__', '__radd__', '__mul__', '__rmul__', '__sub__',
'__floordiv__', '__truediv__']:
fun = getattr(int, name)
setattr(tint, name, make_int_wrapper(fun))
x = tint(42, taint='SECRET')
y = x + 1
y.taint # type: ignore
###Output
_____no_output_____
###Markdown
Part 3: Passing taints from integers to stringsConverting a tainted integer into a string (using `repr()`) should yield a tainted string:```pythonx_s = repr(x)assert x_s.taint == 'SECRET'``` **Solution.** We define the string conversion functions such that they return a tainted string (`tstr`):
###Code
class tint(tint):
def __repr__(self) -> tstr:
s = int.__repr__(self)
return tstr(s, taint=self.taint)
class tint(tint):
def __str__(self) -> tstr:
return tstr(int.__str__(self), taint=self.taint)
x = tint(42, taint='SECRET')
x_s = repr(x)
assert isinstance(x_s, tstr)
assert x_s.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 4: Passing taints from strings to integers Converting a tainted object (with a `taint` attribute) to an integer should pass that taint:```pythonpassword = tstr('1234', taint='NOT_EXACTLY_SECRET')x = tint(password)assert x == 1234assert x.taint == 'NOT_EXACTLY_SECRET'``` **Solution.** This can be done by having the `__init__()` constructor check for a `taint` attribute:
###Code
class tint(tint):
def __init__(self, value, taint=None, **kwargs):
if taint is not None:
self.taint = taint
else:
self.taint = getattr(value, 'taint', None)
password = tstr('1234', taint='NOT_EXACTLY_SECRET')
x = tint(password)
assert x == 1234
assert x.taint == 'NOT_EXACTLY_SECRET'
###Output
_____no_output_____
###Markdown
Tracking Information FlowWe have explored how one could generate better inputs that can penetrate deeper into the program in question. While doing so, we have relied on program crashes to tell us that we have succeeded in finding problems in the program. However, that is rather simplistic. What if the behavior of the program is simply incorrect, but does not lead to a crash? Can one do better?In this chapter, we explore in depth how to track information flows in Python, and how these flows can be used to determine whether a program behaved as expected.
###Code
from bookutils import YouTubeVideo
YouTubeVideo('MJ0VGzVbhYc')
###Output
_____no_output_____
###Markdown
**Prerequisites*** You should have read the [chapter on coverage](Coverage.ipynb).* You should have read the [chapter on probabilistic fuzzing](ProbabilisticGrammarFuzzer.ipynb). We first set up our infrastructure so that we can make use of previously defined functions.
###Code
import bookutils
from typing import List, Any, Optional, Union
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.InformationFlow import ```and then make use of the following features.This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on the originating indexes of the input string. Tracking String Taints `tstr` objects are replacements for Python strings that allow one to track and check _taints_ – that is, information on where a string originated. For instance, one can mark strings that originate from third-party input with a taint of "LOW", meaning that they have a low security level. The taint is passed in the constructor of a `tstr` object:```python>>> thello = tstr('hello', taint='LOW')```A `tstr` object is fully compatible with original Python strings. For instance, we can index it and access substrings:```python>>> thello[:4]'hell'```However, the `tstr` object also stores the taint, which can be accessed using the `taint` attribute:```python>>> thello.taint'LOW'```The neat thing about taints is that they propagate to all strings derived from the original tainted string. Indeed, any operation on a `tstr` string that results in a string fragment produces another `tstr` object that includes the original taint. For example:```python>>> thello[1:2].taint  # type: ignore'LOW'````tstr` objects duplicate most `str` methods, as indicated in the class diagram: Tracking Character Origins `ostr` objects extend `tstr` objects by tracking not only a taint, but also the originating _indexes_ from the input string. This allows you to track exactly where individual characters came from. Assume you have a long string, which at index 100 contains the password `"joshua1234"`. Then you can save this origin information using an `ostr` as follows:```python>>> secret = ostr("joshua1234", origin=100, taint='SECRET')```The `origin` attribute of an `ostr` provides access to a list of indexes:```python>>> secret.origin[100, 101, 102, 103, 104, 105, 106, 107, 108, 109]>>> secret.taint'SECRET'````ostr` objects are compatible with Python strings, except that string operations return `ostr` objects (together with the saved origin and index information). An index of `-1` indicates that the corresponding character has no origin as supplied to the `ostr()` constructor:```python>>> secret_substr = (secret[0:4] + "-" + secret[6:])>>> secret_substr.taint'SECRET'>>> secret_substr.origin[100, 101, 102, 103, -1, 106, 107, 108, 109]````ostr` objects duplicate most `str` methods, as indicated in the class diagram: A Vulnerable Database Say we want to implement an *in-memory database* service in Python. Here is a rather flimsy attempt. We use the following dataset.
###Code
INVENTORY = """\
1997,van,Ford,E350
2000,car,Mercury,Cougar
1999,car,Chevy,Venture\
"""
VEHICLES = INVENTORY.split('\n')
###Output
_____no_output_____
###Markdown
Our DB is a Python class that parses its arguments and raises an `SQLException`, which is defined below.
###Code
class SQLException(Exception):
pass
###Output
_____no_output_____
###Markdown
The database is simply a Python `dict` that is exposed only through SQL queries.
###Code
class DB:
def __init__(self, db={}):
self.db = dict(db)
###Output
_____no_output_____
###Markdown
Representing Tables The database contains tables, which are created by the `create_table()` method. Each table data structure is a pair of values: the first is the metadata containing column names and types; the second is the list of rows in the table.
###Code
class DB(DB):
def create_table(self, table, defs):
self.db[table] = (defs, [])
###Output
_____no_output_____
###Markdown
A table can be retrieved by name using the `table()` method.
###Code
class DB(DB):
def table(self, t_name):
if t_name in self.db:
return self.db[t_name]
raise SQLException('Table (%s) was not found' % repr(t_name))
###Output
_____no_output_____
###Markdown
Here is an example of how to use both. We create a table `inventory` with four columns: `year`, `kind`, `company`, and `model`. Initially, our table is empty.
###Code
def sample_db():
db = DB()
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
return db
###Output
_____no_output_____
###Markdown
Using `table()`, we can retrieve the table definition as well as its contents.
###Code
db = sample_db()
db.table('inventory')
###Output
_____no_output_____
###Markdown
We also define `column()` for retrieving the column definition from a table declaration.
###Code
class DB(DB):
def column(self, table_decl, c_name):
if c_name in table_decl:
return table_decl[c_name]
raise SQLException('Column (%s) was not found' % repr(c_name))
db = sample_db()
decl, rows = db.table('inventory')
db.column(decl, 'year')
###Output
_____no_output_____
###Markdown
Executing SQL StatementsThe `sql()` method of `DB` executes SQL statements. It inspects its arguments, and dispatches the query based on the kind of SQL statement to be executed.
###Code
class DB(DB):
def do_select(self, query):
...
def do_update(self, query):
...
def do_insert(self, query):
...
def do_delete(self, query):
...
def sql(self, query):
methods = [('select ', self.do_select),
('update ', self.do_update),
('insert into ', self.do_insert),
('delete from', self.do_delete)]
for key, method in methods:
if query.startswith(key):
return method(query[len(key):])
raise SQLException('Unknown SQL (%s)' % query)
###Output
_____no_output_____
###Markdown
Here's an example of how to use the `DB` class:
###Code
some_db = DB()
some_db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
However, at this point, the individual methods for handling SQL statements are not yet defined. Let us do this in the next steps. Excursion: Implementing SQL Statements Selecting DataThe `do_select()` method handles SQL `select` statements to retrieve data from a table.
###Code
class DB(DB):
def do_select(self, query):
FROM, WHERE = ' from ', ' where '
table_start = query.find(FROM)
if table_start < 0:
raise SQLException('no table specified')
where_start = query.find(WHERE)
select = query[:table_start]
if where_start >= 0:
t_name = query[table_start + len(FROM):where_start]
where = query[where_start + len(WHERE):]
else:
t_name = query[table_start + len(FROM):]
where = ''
_, table = self.table(t_name)
if where:
selected = self.expression_clause(table, "(%s)" % where)
selected_rows = [hm for i, data, hm in selected if data]
else:
selected_rows = table
rows = self.expression_clause(selected_rows, "(%s)" % select)
return [data for i, data, hm in rows]
###Output
_____no_output_____
###Markdown
The `expression_clause()` method is used for two purposes:1. In the form `select` $x$, $y$, $z$ `from` $t$, it _evaluates_ (and returns) the expressions $x$, $y$, $z$ in the contexts of the selected rows.2. If a clause `where` $p$ is given, it also evaluates $p$ in the context of the rows and includes the rows in the selection only if $p$ holds.To evaluate expressions like $x$, $y$, $z$ or $p$, the method `expression_clause()` makes use of the Python `eval()` evaluation function.
###Code
class DB(DB):
def expression_clause(self, table, statement):
selected = []
for i, hm in enumerate(table):
selected.append((i, self.my_eval(statement, {}, hm), hm))
return selected
###Output
_____no_output_____
###Markdown
If `eval()` fails for whatever reason, we raise an exception:
###Code
class DB(DB):
def my_eval(self, statement, g, l):
try:
return eval(statement, g, l)
except Exception:
raise SQLException('Invalid WHERE (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
**Note:** Using `eval()` here introduces some important security issues, which we will discuss later in this chapter. Here's how we can use `sql()` to issue a query. Note that the table is still empty.
###Code
db = sample_db()
db.sql('select year from inventory')
db = sample_db()
db.sql('select year from inventory where year == 2018')
###Output
_____no_output_____
###Markdown
Inserting DataThe `do_insert()` method handles SQL `insert` statements.
###Code
class DB(DB):
def do_insert(self, query):
VALUES = ' values '
table_end = query.find('(')
t_name = query[:table_end].strip()
names_end = query.find(')')
decls, table = self.table(t_name)
names = [i.strip() for i in query[table_end + 1:names_end].split(',')]
# verify columns exist
for k in names:
self.column(decls, k)
values_start = query.find(VALUES)
if values_start < 0:
raise SQLException('Invalid INSERT (%s)' % repr(query))
values = [
i.strip() for i in query[values_start + len(VALUES) + 1:-1].split(',')
]
if len(names) != len(values):
raise SQLException(
'names(%s) != values(%s)' % (repr(names), repr(values)))
# dict lookups happen in C code, so we can't use that
kvs = {}
for k,v in zip(names, values):
for key,kval in decls.items():
if k == key:
kvs[key] = self.convert(kval, v)
table.append(kvs)
###Output
_____no_output_____
###Markdown
In SQL, a column can come in any supported data type. To ensure it is stored using the type originally declared, we need the ability to convert the values to specific types which is provided by `convert()`.
###Code
import ast
class DB(DB):
def convert(self, cast, value):
try:
return cast(ast.literal_eval(value))
except:
raise SQLException('Invalid Conversion %s(%s)' % (cast, value))
###Output
_____no_output_____
###Markdown
Here is an example of how to use the SQL `insert` command:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.table('inventory')
###Output
_____no_output_____
###Markdown
With the database filled, we can also run more complex queries:
###Code
db.sql('select year + 1, kind from inventory')
db.sql('select year, kind from inventory where year == 1997')
###Output
_____no_output_____
###Markdown
Updating DataSimilarly, `do_update()` handles SQL `update` statements.
###Code
class DB(DB):
def do_update(self, query):
SET, WHERE = ' set ', ' where '
table_end = query.find(SET)
if table_end < 0:
raise SQLException('Invalid UPDATE (%s)' % repr(query))
set_end = table_end + 5
t_name = query[:table_end]
decls, table = self.table(t_name)
names_end = query.find(WHERE)
if names_end >= 0:
names = query[set_end:names_end]
where = query[names_end + len(WHERE):]
else:
names = query[set_end:]
where = ''
sets = [[i.strip() for i in name.split('=')]
for name in names.split(',')]
# verify columns exist
for k, v in sets:
self.column(decls, k)
if where:
selected = self.expression_clause(table, "(%s)" % where)
updated = [hm for i, d, hm in selected if d]
else:
updated = table
for hm in updated:
for k, v in sets:
# we can not do dict lookups because it is implemented in C.
for key, kval in decls.items():
if key == k:
hm[key] = self.convert(kval, v)
return "%d records were updated" % len(updated)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can update things:
###Code
db.sql('update inventory set year = 1998 where year == 1997')
db.sql('select year from inventory')
db.table('inventory')
###Output
_____no_output_____
###Markdown
Deleting DataFinally, SQL `delete` statements are handled by `do_delete()`.
###Code
class DB(DB):
def do_delete(self, query):
WHERE = ' where '
table_end = query.find(WHERE)
if table_end < 0:
raise SQLException('Invalid DELETE (%s)' % query)
t_name = query[:table_end].strip()
_, table = self.table(t_name)
where = query[table_end + len(WHERE):]
selected = self.expression_clause(table, "%s" % where)
deleted = [i for i, d, hm in selected if d]
for i in sorted(deleted, reverse=True):
del table[i]
return "%d records were deleted" % len(deleted)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can delete data:
###Code
db.sql('delete from inventory where company == "Ford"')
###Output
_____no_output_____
###Markdown
Our database is now empty:
###Code
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
End of Excursion Here is how our database can be used.
###Code
db = DB()
###Output
_____no_output_____
###Markdown
We first create a table in our database with the correct data types.
###Code
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
###Output
_____no_output_____
###Markdown
Here is a simple convenience function to update the table using our dataset.
###Code
def update_inventory(sqldb, vehicle):
inventory_def = sqldb.db['inventory'][0]
k, v = zip(*inventory_def.items())
val = [repr(cast(val)) for cast, val in zip(v, vehicle.split(','))]
sqldb.sql('insert into inventory (%s) values (%s)' % (','.join(k),
','.join(val)))
for V in VEHICLES:
update_inventory(db, V)
###Output
_____no_output_____
###Markdown
Our database now contains the same dataset as `VEHICLES` in the `inventory` table.
###Code
db.db
###Output
_____no_output_____
###Markdown
Here is a sample select statement.
###Code
db.sql('select year,kind from inventory')
db.sql("select company,model from inventory where kind == 'car'")
###Output
_____no_output_____
###Markdown
We can run updates on it.
###Code
db.sql("update inventory set year = 1998, company = 'Suzuki' where kind == 'van'")
db.db
###Output
_____no_output_____
###Markdown
It can even do mathematics on the fly!
###Code
db.sql('select int(year)+10 from inventory')
###Output
_____no_output_____
###Markdown
Adding a new row to our table.
###Code
db.sql("insert into inventory (year, kind, company, model) values (1, 'charriot', 'Rome', 'Quadriga')")
db.db
###Output
_____no_output_____
###Markdown
Which we then delete.
###Code
db.sql("delete from inventory where year < 1900")
###Output
_____no_output_____
###Markdown
Fuzzing SQLTo verify that everything is OK, let us fuzz. First we define our grammar. Excursion: Defining a SQL grammar
###Code
import string
from Grammars import START_SYMBOL, Grammar, Expansion, \
is_valid_grammar, extend_grammar
EXPR_GRAMMAR: Grammar = {
"<start>": ["<expr>"],
"<expr>": ["<bexpr>", "<aexpr>", "(<expr>)", "<term>"],
"<bexpr>": [
"<aexpr><lt><aexpr>",
"<aexpr><gt><aexpr>",
"<expr>==<expr>",
"<expr>!=<expr>",
],
"<aexpr>": [
"<aexpr>+<aexpr>", "<aexpr>-<aexpr>", "<aexpr>*<aexpr>",
"<aexpr>/<aexpr>", "<word>(<exprs>)", "<expr>"
],
"<exprs>": ["<expr>,<exprs>", "<expr>"],
"<lt>": ["<"],
"<gt>": [">"],
"<term>": ["<number>", "<word>"],
"<number>": ["<integer>.<integer>", "<integer>", "-<number>"],
"<integer>": ["<digit><integer>", "<digit>"],
"<word>": ["<word><letter>", "<word><digit>", "<letter>"],
"<digit>":
list(string.digits),
"<letter>":
list(string.ascii_letters + '_:.')
}
assert is_valid_grammar(EXPR_GRAMMAR)
PRINTABLE_CHARS: List[str] = [i for i in string.printable
if i not in "<>'\"\t\n\r\x0b\x0c\x00"] + ['<lt>', '<gt>']
INVENTORY_GRAMMAR = extend_grammar(EXPR_GRAMMAR,
{
'<start>': ['<query>'],
'<query>': [
'select <exprs> from <table>',
'select <exprs> from <table> where <bexpr>',
'insert into <table> (<names>) values (<literals>)',
'update <table> set <assignments> where <bexpr>',
'delete from <table> where <bexpr>',
],
'<table>': ['<word>'],
'<names>': ['<column>,<names>', '<column>'],
'<column>': ['<word>'],
'<literals>': ['<literal>', '<literal>,<literals>'],
'<literal>': ['<number>', "'<chars>'"],
'<assignments>': ['<kvp>,<assignments>', '<kvp>'],
'<kvp>': ['<column>=<value>'],
'<value>': ['<word>'],
'<chars>': ['<char>', '<char><chars>'],
'<char>': PRINTABLE_CHARS, # type: ignore
})
assert is_valid_grammar(INVENTORY_GRAMMAR)
###Output
_____no_output_____
###Markdown
As can be seen from the source of our database, the functions always check whether the table name is correct. Hence, we modify the grammar to choose our particular table so that it will have a better chance of reaching deeper. We will see in the later sections how this can be done automatically.
###Code
INVENTORY_GRAMMAR_F = extend_grammar(INVENTORY_GRAMMAR,
{'<table>': ['inventory']})
###Output
_____no_output_____
###Markdown
End of Excursion
###Code
from GrammarFuzzer import GrammarFuzzer
gf = GrammarFuzzer(INVENTORY_GRAMMAR_F)
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = db.sql(query)
print(repr(res))
except SQLException as e:
print("> ", e)
pass
except:
traceback.print_exc()
break
print()
###Output
'select O6fo,-977091.1,-36.46 from inventory'
> Invalid WHERE ('(O6fo,-977091.1,-36.46)')
'select g3 from inventory where -3.0!=V/g/b+Q*M*G'
> Invalid WHERE ('(-3.0!=V/g/b+Q*M*G)')
'update inventory set z=a,x=F_,Q=K where p(M)<_*S'
> Column ('z') was not found
'update inventory set R=L5pk where e*l*y-u>K+U(:)'
> Column ('R') was not found
'select _/d*Q+H/d(k)<t+M-A+P from inventory'
> Invalid WHERE ('(_/d*Q+H/d(k)<t+M-A+P)')
'select F5 from inventory'
> Invalid WHERE ('(F5)')
'update inventory set jWh.=a6 where wcY(M)>IB7(i)'
> Column ('jWh.') was not found
'update inventory set U=y where L(W<c,(U!=W))<V(((q)==m<F),O,l)'
> Column ('U') was not found
'delete from inventory where M/b-O*h*E<H-W>e(Y)-P'
> Invalid WHERE ('M/b-O*h*E<H-W>e(Y)-P')
'select ((kP(86)+b*S+J/Z/U+i(U))) from inventory'
> Invalid WHERE ('(((kP(86)+b*S+J/Z/U+i(U))))')
###Markdown
Fuzzing does not seem to have triggered any crashes. However, are crashes the only errors that we should be worried about? The Evil of Eval In our database implementation – notably in the `expression_clause()` method – we have made use of `eval()` to evaluate expressions using the Python interpreter. This allows us to unleash the full power of Python expressions within our SQL statements.
###Code
db.sql('select year from inventory where year < 2000')
###Output
_____no_output_____
###Markdown
In the above query, the clause `year < 2000` is evaluated by `expression_clause()` using Python in the context of each row; hence, `year < 2000` evaluates to either `True` or `False`. The same holds for the expressions being `select`ed:
###Code
db.sql('select year - 1900 if year < 2000 else year - 2000 from inventory')
###Output
_____no_output_____
###Markdown
This works because `year - 1900 if year < 2000 else year - 2000` is a valid Python expression. (It is not a valid SQL expression, though.) The problem with the above is that there is _no limitation_ to what the Python expression can do. What if the user tries the following?
###Code
db.sql('select __import__("os").popen("pwd").read() from inventory')
###Output
_____no_output_____
###Markdown
The above statement effectively reads from the user's file system. Instead of `os.popen("pwd").read()`, it could execute arbitrary Python commands – to access data, install software, run a background process. This is where "the full power of Python expressions" turns back on us. What we want is to allow our _program_ to make full use of its power; yet, the _user_ (or any third party) should not be entrusted to do the same. Hence, we need to differentiate between (trusted) _input from the program_ and (untrusted) _input from the user_. One method that allows such differentiation is that of *dynamic taint analysis*. The idea is to identify the functions that accept user input as *sources* that *taint* any string that comes in through them, and those functions that perform dangerous operations as *sinks*. Finally, we bless certain functions as *taint sanitizers*. The idea is that an input from a source should never reach a sink without undergoing sanitization first. This allows us to use a stronger oracle than simply checking for crashes. Tracking String Taints There are various levels of taint tracking that one can perform. The simplest is to track that a string fragment originated in a specific environment, and has not undergone a taint removal process. For this, we simply need to wrap the original string, together with an environment identifier (the _taint_), in `tstr`, and produce `tstr` instances on each operation that results in another string fragment. The attribute `taint` holds a label identifying the environment from which this instance was derived. A Class for Tainted Strings For capturing information flows we need a new string class. The idea is to use the new tainted string class `tstr` as a wrapper on the original `str` class. However, `str` is an *immutable* class. Hence, it does not call its `__init__()` method after being constructed. This means that any subclasses of `str` also will not get the `__init__()` method called. If we want to get our initialization routine called, we need to [hook into `__new__()`](https://docs.python.org/3/reference/datamodel.html#basic-customization) and return an instance of our own class. We combine this with our initialization code in `__init__()`.
###Code
class tstr(str):
"""Wrapper for strings, saving taint information"""
def __new__(cls, value, *args, **kw):
"""Create a tstr() instance. Used internally."""
return str.__new__(cls, value)
def __init__(self, value: Any, taint: Any = None, **kwargs) -> None:
"""Constructor.
`value` is the string value the `tstr` object is to be constructed from.
`taint` is an (optional) taint to be propagated to derived strings."""
self.taint: Any = taint
class tstr(tstr):
def __repr__(self) -> tstr:
"""Return a representation."""
return tstr(str.__repr__(self), taint=self.taint)
class tstr(tstr):
def __str__(self) -> str:
"""Convert to string"""
return str.__str__(self)
###Output
_____no_output_____
###Markdown
For example, if we wrap `"hello"` in `tstr`, then we should be able to access its taint:
###Code
thello: tstr = tstr('hello', taint='LOW')
thello.taint
repr(thello).taint # type: ignore
###Output
_____no_output_____
###Markdown
By default, when we wrap a string, it is tainted. Hence, we also need a way to clear the taint in the string. One way is to simply return a `str` instance as above. However, one may sometimes wish to remove the taint from an existing instance. This is accomplished with `clear_taint()`, which simply sets the taint to `None`. This method comes with a companion method `has_taint()`, which checks whether a `tstr` instance is currently tainted.
###Code
class tstr(tstr):
def clear_taint(self):
"""Remove taint"""
self.taint = None
return self
def has_taint(self):
"""Check if taint is present"""
return self.taint is not None
###Output
_____no_output_____
###Markdown
String OperatorsTo propagate the taint, we have to extend string functions, such as operators. We can do so in one single big step, overloading all string methods and operators. When we create a new string from an existing tainted string, we propagate its taint.
###Code
class tstr(tstr):
def create(self, s):
return tstr(s, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_str_wrapper()` function creates a wrapper around an existing string method which attaches the taint to the result of the method:
###Code
class tstr(tstr):
@staticmethod
def make_str_wrapper(fun):
"""Make `fun` (a `str` method) a method in `tstr`"""
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
return self.create(res)
if hasattr(fun, '__doc__'):
# Copy docstring
proxy.__doc__ = fun.__doc__
return proxy
###Output
_____no_output_____
###Markdown
We do this for all string methods that return a string:
###Code
def informationflow_init_1():
for name in ['__format__', '__mod__', '__rmod__', '__getitem__',
'__add__', '__mul__', '__rmul__',
'capitalize', 'casefold', 'center', 'encode',
'expandtabs', 'format', 'format_map', 'join',
'ljust', 'lower', 'lstrip', 'replace',
'rjust', 'rstrip', 'strip', 'swapcase', 'title', 'translate', 'upper']:
fun = getattr(str, name)
setattr(tstr, name, tstr.make_str_wrapper(fun))
informationflow_init_1()
INITIALIZER_LIST = [informationflow_init_1]
def initialize():
for fn in INITIALIZER_LIST:
fn()
###Output
_____no_output_____
###Markdown
The one missing operator is `+` with a regular string on the left side and a tainted string on the right side. Python supports a `__radd__()` method which is invoked if the associated object is used on the right side of an addition.
###Code
class tstr(tstr):
def __radd__(self, value):
"""Return value + self, as a `tstr` object"""
return self.create(value + str(self))
###Output
_____no_output_____
###Markdown
With this, we are already done. Let us create a string `thello` with a taint `LOW`.
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
Now, any substring will also be tainted:
###Code
thello[0].taint # type: ignore
thello[1:3].taint # type: ignore
###Output
_____no_output_____
###Markdown
String additions will return a `tstr` object with the taint:
###Code
(tstr('foo', taint='HIGH') + 'bar').taint # type: ignore
###Output
_____no_output_____
###Markdown
Our `__radd__()` method ensures this also works if the `tstr` occurs on the right side of a string addition:
###Code
('foo' + tstr('bar', taint='HIGH')).taint # type: ignore
thello += ', world' # type: ignore
thello.taint # type: ignore
###Output
_____no_output_____
###Markdown
Other operators such as multiplication also work:
###Code
(thello * 5).taint # type: ignore
('hw %s' % thello).taint # type: ignore
(tstr('hello %s', taint='HIGH') % 'world').taint # type: ignore
###Output
_____no_output_____
###Markdown
Tracking Untrusted InputSo, what can one do with tainted strings? We reconsider the `DB` example. We define a "better" `TrustedDB` which only accepts strings tainted as `"TRUSTED"`.
###Code
class TrustedDB(DB):
def sql(self, s):
assert isinstance(s, tstr), "Need a tainted string"
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
return super().sql(s)
###Output
_____no_output_____
###Markdown
Feeding a string with an "unknown" (i.e., non-existing) trust level will cause `TrustedDB` to fail:
###Code
bdb = TrustedDB(db.db)
from ExpectError import ExpectError
with ExpectError():
bdb.sql("select year from INVENTORY")
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58879/3935989889.py", line 2, in <module>
bdb.sql("select year from INVENTORY")
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58879/995123203.py", line 3, in sql
assert isinstance(s, tstr), "Need a tainted string"
AssertionError: Need a tainted string (expected)
###Markdown
Additionally, any user input would originally be tagged with `"UNTRUSTED"` as taint. If we place an untrusted string into our better database, it will also fail:
###Code
bad_user_input = tstr('__import__("os").popen("ls").read()', taint='UNTRUSTED')
with ExpectError():
bdb.sql(bad_user_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58879/3307042773.py", line 3, in <module>
bdb.sql(bad_user_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58879/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
Hence, somewhere along the computation, we have to turn the "untrusted" inputs into "trusted" strings. This process is called *sanitization*. A simple sanitization function for our purposes could ensure that the input consists only of a few allowed characters (letters, digits, and some punctuation, but no quotes); if this is the case, then the input gets a new `"TRUSTED"` taint. If not, we turn the string into an (untrusted) empty string; other alternatives would be to raise an error or to escape or delete "untrusted" characters.
###Code
import re
def sanitize(user_input):
assert isinstance(user_input, tstr)
if re.match(
r'^select +[-a-zA-Z0-9_, ()]+ from +[-a-zA-Z0-9_, ()]+$', user_input):
return tstr(user_input, taint='TRUSTED')
else:
return tstr('', taint='UNTRUSTED')
good_user_input = tstr("select year,model from inventory", taint='UNTRUSTED')
sanitized_input = sanitize(good_user_input)
sanitized_input
sanitized_input.taint
bdb.sql(sanitized_input)
###Output
_____no_output_____
###Markdown
Let us now try out our untrusted input:
###Code
sanitized_input = sanitize(bad_user_input)
sanitized_input
sanitized_input.taint
with ExpectError():
bdb.sql(sanitized_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58879/249000876.py", line 2, in <module>
bdb.sql(sanitized_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58879/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
Tracking Information FlowWe have explored how one could generate better inputs that can penetrate deeper into the program in question. While doing so, we have relied on program crashes to tell us that we have succeeded in finding problems in the program. However, that is rather simplistic. What if the behavior of the program is simply incorrect, but does not lead to a crash? Can one do better?In this chapter, we explore in depth how to track information flows in Python, and how these flows can be used to determine whether a program behaved as expected. **Prerequisites*** You should have read the [chapter on coverage](Coverage.ipynb).* You should have read the [chapter on probabilistic fuzzing](ProbabilisticGrammarFuzzer.ipynb). We first set up our infrastructure so that we can make use of previously defined functions.
###Code
import fuzzingbook_utils
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.InformationFlow import ```and then make use of the following features.This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string.For tracking information on security properties, use `tstr` as follows:```python>>> thello = tstr('hello', taint='LOW')```Now, any operation from `thello` that results in a string fragment would include the correct taint. For example:```python>>> thello[1:2].taint'LOW'```For tracking the originating indexes from the input string, use `ostr` as follows:```python>>> ohw = ostr("hello\tworld", origin=100)```The originating indexes can be recovered as follows:```python>>> (ohw[0:4] +"-"+ ohw[6:]).origin[100, 101, 102, 103, -1, 106, 107, 108, 109, 110]``` A Vulnerable DatabaseSay we want to implement an *in-memory database* service in Python. Here is a rather flimsy attempt. We use the following dataset.
###Code
INVENTORY = """\
1997,van,Ford,E350
2000,car,Mercury,Cougar
1999,car,Chevy,Venture\
"""
VEHICLES = INVENTORY.split('\n')
###Output
_____no_output_____
###Markdown
Our DB is a Python class that parses the SQL statements passed to it and raises `SQLException` (defined below) on errors.
###Code
class SQLException(Exception):
pass
###Output
_____no_output_____
###Markdown
The database is simply a Python `dict` that is exposed only through SQL queries.
###Code
class DB:
def __init__(self, db={}):
self.db = dict(db)
###Output
_____no_output_____
###Markdown
Representing TablesThe database contains tables, which are created by the method `create_table()`. Each table data structure is a pair of values. The first one is the metadata containing column names and types. The second one is the list of rows in the table.
###Code
class DB(DB):
def create_table(self, table, defs):
self.db[table] = (defs, [])
###Output
_____no_output_____
###Markdown
A table can be retrieved by name using the `table()` method.
###Code
class DB(DB):
def table(self, t_name):
if t_name in self.db:
return self.db[t_name]
raise SQLException('Table (%s) was not found' % repr(t_name))
###Output
_____no_output_____
###Markdown
Here is an example of how to use both. We create a table `inventory` with four columns: `year`, `kind`, `company`, and `model`. Initially, the table is empty.
###Code
def sample_db():
db = DB()
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
return db
###Output
_____no_output_____
###Markdown
Using `table()`, we can retrieve the table definition as well as its contents.
###Code
db = sample_db()
db.table('inventory')
###Output
_____no_output_____
###Markdown
We also define `column()` for retrieving the column definition from a table declaration.
###Code
class DB(DB):
def column(self, table_decl, c_name):
if c_name in table_decl:
return table_decl[c_name]
raise SQLException('Column (%s) was not found' % repr(c_name))
db = sample_db()
decl, rows = db.table('inventory')
db.column(decl, 'year')
###Output
_____no_output_____
###Markdown
Executing SQL StatementsThe `sql()` method of `DB` executes SQL statements. It inspects its arguments, and dispatches the query based on the kind of SQL statement to be executed.
###Code
class DB(DB):
def do_select(self, query):
assert False
def do_update(self, query):
assert False
def do_insert(self, query):
assert False
def do_delete(self, query):
assert False
def sql(self, query):
methods = [('select ', self.do_select),
('update ', self.do_update),
('insert into ', self.do_insert),
('delete from', self.do_delete)]
for key, method in methods:
if query.startswith(key):
return method(query[len(key):])
raise SQLException('Unknown SQL (%s)' % query)
###Output
_____no_output_____
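###Markdown
As a small sketch (not part of the original flow), a statement that matches none of the registered prefixes is rejected by the dispatcher:
###Code
demo_db = sample_db()
try:
    demo_db.sql('drop table inventory')  # no handler matches `drop`
except SQLException as e:
    dispatch_error = e  # SQLException: Unknown SQL (drop table inventory)
dispatch_error
###Output
_____no_output_____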
###Markdown
At this point, the individual methods for handling SQL statements are not yet defined. Let us do this in the next steps. Selecting DataThe `do_select()` method handles SQL `select` statements to retrieve data from a table.
###Code
class DB(DB):
def do_select(self, query):
FROM, WHERE = ' from ', ' where '
table_start = query.find(FROM)
if table_start < 0:
raise SQLException('no table specified')
where_start = query.find(WHERE)
select = query[:table_start]
if where_start >= 0:
t_name = query[table_start + len(FROM):where_start]
where = query[where_start + len(WHERE):]
else:
t_name = query[table_start + len(FROM):]
where = ''
_, table = self.table(t_name)
if where:
selected = self.expression_clause(table, "(%s)" % where)
selected_rows = [hm for i, data, hm in selected if data]
else:
selected_rows = table
rows = self.expression_clause(selected_rows, "(%s)" % select)
return [data for i, data, hm in rows]
###Output
_____no_output_____
###Markdown
The `expression_clause()` method is used for two purposes:1. In the form `select` $x$, $y$, $z$ `from` $t$, it _evaluates_ (and returns) the expressions $x$, $y$, $z$ in the contexts of the selected rows.2. If a clause `where` $p$ is given, it also evaluates $p$ in the context of the rows and includes the rows in the selection only if $p$ holds.To evaluate expressions like $x$, $y$, $z$ or $p$, we make use of the Python evaluation function.
###Code
class DB(DB):
def expression_clause(self, table, statement):
selected = []
for i, hm in enumerate(table):
selected.append((i, self.my_eval(statement, {}, hm), hm))
return selected
###Output
_____no_output_____
###Markdown
Internally, it calls `my_eval()` to evaluate the given statement.
###Code
class DB(DB):
def my_eval(self, statement, g, l):
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid WHERE (%s)' % repr(statement))
###Output
_____no_output_____
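###Markdown
To illustrate how `expression_clause()` and `my_eval()` work together, here is a small sketch (not part of the original flow) that evaluates a clause over a hand-constructed row:
###Code
demo_db = sample_db()
_, demo_rows = demo_db.table('inventory')
demo_rows.append({'year': 1997, 'kind': 'van', 'company': 'Ford', 'model': 'E350'})
# Each row dict serves as the local namespace for `eval()`
demo_db.expression_clause(demo_rows, "(year < 2000)")
###Output
_____no_output_____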
###Markdown
**Note:** Using `eval()` here introduces some important security issues, which we will discuss later in this chapter. Here's how we can use `sql()` to issue a query. Note that the table is still empty.
###Code
db = sample_db()
db.sql('select year from inventory')
db = sample_db()
db.sql('select year from inventory where year == 2018')
###Output
_____no_output_____
###Markdown
Inserting DataThe `do_insert()` method handles SQL `insert` statements.
###Code
class DB(DB):
def do_insert(self, query):
VALUES = ' values '
table_end = query.find('(')
t_name = query[:table_end].strip()
names_end = query.find(')')
decls, table = self.table(t_name)
names = [i.strip() for i in query[table_end + 1:names_end].split(',')]
# verify columns exist
for k in names:
self.column(decls, k)
values_start = query.find(VALUES)
if values_start < 0:
raise SQLException('Invalid INSERT (%s)' % repr(query))
values = [
i.strip() for i in query[values_start + len(VALUES) + 1:-1].split(',')
]
if len(names) != len(values):
raise SQLException(
'names(%s) != values(%s)' % (repr(names), repr(values)))
        # dict lookups happen in C code, so we can't use them here
kvs = {}
for k,v in zip(names, values):
for key,kval in decls.items():
if k == key:
kvs[key] = self.convert(kval, v)
table.append(kvs)
###Output
_____no_output_____
###Markdown
In SQL, a column can come in any supported data type. To ensure it is stored using the type originally declared, we need the ability to convert the values to specific types which is provided by `convert()`.
###Code
import ast
class DB(DB):
def convert(self, cast, value):
try:
return cast(ast.literal_eval(value))
except:
raise SQLException('Invalid Conversion %s(%s)' % (cast, value))
###Output
_____no_output_____
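###Markdown
As a small illustration (a sketch, not in the original flow), `convert()` applies the declared column type to a literal given as a string:
###Code
demo_db = sample_db()
demo_decls, _ = demo_db.table('inventory')
demo_db.convert(demo_decls['year'], '1997')     # -> 1997 (an int)
demo_db.convert(demo_decls['model'], '"E350"')  # -> 'E350' (a str)
###Output
_____no_output_____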
###Markdown
Here is an example of how to use the SQL `insert` command:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.table('inventory')
###Output
_____no_output_____
###Markdown
With the database filled, we can also run more complex queries:
###Code
db.sql('select year + 1, kind from inventory')
db.sql('select year, kind from inventory where year == 1997')
###Output
_____no_output_____
###Markdown
Updating DataSimilarly, `do_update()` handles SQL `update` statements.
###Code
class DB(DB):
def do_update(self, query):
SET, WHERE = ' set ', ' where '
table_end = query.find(SET)
if table_end < 0:
raise SQLException('Invalid UPDATE (%s)' % repr(query))
set_end = table_end + 5
t_name = query[:table_end]
decls, table = self.table(t_name)
names_end = query.find(WHERE)
if names_end >= 0:
names = query[set_end:names_end]
where = query[names_end + len(WHERE):]
else:
names = query[set_end:]
where = ''
sets = [[i.strip() for i in name.split('=')]
for name in names.split(',')]
# verify columns exist
for k, v in sets:
self.column(decls, k)
if where:
selected = self.expression_clause(table, "(%s)" % where)
updated = [hm for i, d, hm in selected if d]
else:
updated = table
for hm in updated:
for k, v in sets:
                # we cannot do dict lookups because they are implemented in C
for key, kval in decls.items():
if key == k:
hm[key] = self.convert(kval, v)
return "%d records were updated" % len(updated)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can update things:
###Code
db.sql('update inventory set year = 1998 where year == 1997')
db.sql('select year from inventory')
db.table('inventory')
###Output
_____no_output_____
###Markdown
Deleting DataFinally, SQL `delete` statements are handled by `do_delete()`.
###Code
class DB(DB):
def do_delete(self, query):
WHERE = ' where '
table_end = query.find(WHERE)
if table_end < 0:
raise SQLException('Invalid DELETE (%s)' % query)
t_name = query[:table_end].strip()
_, table = self.table(t_name)
where = query[table_end + len(WHERE):]
selected = self.expression_clause(table, "%s" % where)
deleted = [i for i, d, hm in selected if d]
for i in sorted(deleted, reverse=True):
del table[i]
return "%d records were deleted" % len(deleted)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can delete data:
###Code
db.sql('delete from inventory where company == "Ford"')
###Output
_____no_output_____
###Markdown
Our database is now empty:
###Code
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
All Methods TogetherHere is how our database can be used.
###Code
db = DB()
###Output
_____no_output_____
###Markdown
Again, we first create a table in our database with the correct data types.
###Code
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
###Output
_____no_output_____
###Markdown
Here is a simple convenience function to update the table using our dataset.
###Code
def update_inventory(sqldb, vehicle):
inventory_def = sqldb.db['inventory'][0]
k, v = zip(*inventory_def.items())
val = [repr(cast(val)) for cast, val in zip(v, vehicle.split(','))]
sqldb.sql('insert into inventory (%s) values (%s)' % (','.join(k),
','.join(val)))
for V in VEHICLES:
update_inventory(db, V)
###Output
_____no_output_____
###Markdown
Our database now contains the same dataset as `VEHICLES` in the `inventory` table.
###Code
db.db
###Output
_____no_output_____
###Markdown
Here is a sample select statement.
###Code
db.sql('select year,kind from inventory')
db.sql("select company,model from inventory where kind == 'car'")
###Output
_____no_output_____
###Markdown
We can run updates on it.
###Code
db.sql("update inventory set year = 1998, company = 'Suzuki' where kind == 'van'")
db.db
###Output
_____no_output_____
###Markdown
It can even do mathematics on the fly!
###Code
db.sql('select int(year)+10 from inventory')
###Output
_____no_output_____
###Markdown
We can also add a new row to our table.
###Code
db.sql("insert into inventory (year, kind, company, model) values (1, 'charriot', 'Rome', 'Quadriga')")
db.db
###Output
_____no_output_____
###Markdown
We then delete it again.
###Code
db.sql("delete from inventory where year < 1900")
###Output
_____no_output_____
###Markdown
Fuzzing SQLTo verify that everything is OK, let us fuzz. First we define our grammar.
###Code
import string
EXPR_GRAMMAR = {
"<start>": ["<expr>"],
"<expr>": ["<bexpr>", "<aexpr>", "(<expr>)", "<term>"],
"<bexpr>": [
"<aexpr><lt><aexpr>",
"<aexpr><gt><aexpr>",
"<expr>==<expr>",
"<expr>!=<expr>",
],
"<aexpr>": [
"<aexpr>+<aexpr>", "<aexpr>-<aexpr>", "<aexpr>*<aexpr>",
"<aexpr>/<aexpr>", "<word>(<exprs>)", "<expr>"
],
"<exprs>": ["<expr>,<exprs>", "<expr>"],
"<lt>": ["<"],
"<gt>": [">"],
"<term>": ["<number>", "<word>"],
"<number>": ["<integer>.<integer>", "<integer>", "-<number>"],
"<integer>": ["<digit><integer>", "<digit>"],
"<word>": ["<word><letter>", "<word><digit>", "<letter>"],
"<digit>":
list(string.digits),
"<letter>":
list(string.ascii_letters + '_:.')
}
INVENTORY_GRAMMAR = dict(
EXPR_GRAMMAR, **{
'<start>': ['<query>'],
'<query>': [
'select <exprs> from <table>',
'select <exprs> from <table> where <bexpr>',
'insert into <table> (<names>) values (<literals>)',
'update <table> set <assignments> where <bexpr>',
'delete from <table> where <bexpr>',
],
'<table>': ['<word>'],
'<names>': ['<column>,<names>', '<column>'],
'<column>': ['<word>'],
'<literals>': ['<literal>', '<literal>,<literals>'],
'<literal>': ['<number>', "'<chars>'"],
'<assignments>': ['<kvp>,<assignments>', '<kvp>'],
'<kvp>': ['<column>=<value>'],
'<value>': ['<word>'],
'<chars>': ['<char>', '<char><chars>'],
'<char>':
[i for i in string.printable if i not in "<>'\"\t\n\r\x0b\x0c\x00"
] + ['<lt>', '<gt>'],
})
###Output
_____no_output_____
###Markdown
As can be seen from the source of our database, the functions always check whether the table name is correct. Hence, we modify the grammar to choose our particular table so that it will have a better chance of reaching deeper. We will see in the later sections how this can be done automatically.
###Code
INVENTORY_GRAMMAR_F = dict(INVENTORY_GRAMMAR, **{'<table>': ['inventory']})
from GrammarFuzzer import GrammarFuzzer
import traceback  # used by traceback.print_exc() in the loop below
gf = GrammarFuzzer(INVENTORY_GRAMMAR_F)
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = db.sql(query)
print(repr(res))
except SQLException as e:
print("> ", e)
pass
except:
traceback.print_exc()
break
print()
###Output
'select O6fo,-977091.1,-36.46 from inventory'
> Invalid WHERE ('(O6fo,-977091.1,-36.46)')
'select g3 from inventory where -3.0!=V/g/b+Q*M*G'
> Invalid WHERE ('(-3.0!=V/g/b+Q*M*G)')
'update inventory set z=a,x=F_,Q=K where p(M)<_*S'
> Column ('z') was not found
'update inventory set R=L5pk where e*l*y-u>K+U(:)'
> Column ('R') was not found
'select _/d*Q+H/d(k)<t+M-A+P from inventory'
> Invalid WHERE ('(_/d*Q+H/d(k)<t+M-A+P)')
'select F5 from inventory'
> Invalid WHERE ('(F5)')
'update inventory set jWh.=a6 where wcY(M)>IB7(i)'
> Column ('jWh.') was not found
'update inventory set U=y where L(W<c,(U!=W))<V(((q)==m<F),O,l)'
> Column ('U') was not found
'delete from inventory where M/b-O*h*E<H-W>e(Y)-P'
> Invalid WHERE ('M/b-O*h*E<H-W>e(Y)-P')
'select ((kP(86)+b*S+J/Z/U+i(U))) from inventory'
> Invalid WHERE ('(((kP(86)+b*S+J/Z/U+i(U))))')
###Markdown
Fuzzing does not seem to have triggered any crashes. However, are crashes the only errors that we should be worried about? The Evil of EvalIn our implementation, we have made use of `eval()` to evaluate expressions using the Python interpreter. This allows us to unleash the full power of Python expressions within our SQL statements.
###Code
db.sql('select year from inventory where year < 2000')
###Output
_____no_output_____
###Markdown
In the above query, the clause `year < 2000` is evaluated by `expression_clause()`, which uses Python to evaluate it in the context of each row; hence, `year < 2000` evaluates to either `True` or `False`. The same holds for the expressions being `select`ed:
###Code
db.sql('select year - 1900 if year < 2000 else year - 2000 from inventory')
###Output
_____no_output_____
###Markdown
This works because `year - 1900 if year < 2000 else year - 2000` is a valid Python expression. (It is not a valid SQL expression, though.) The problem with the above is that there is _no limitation_ to what the Python expression can do. What if the user tries the following?
###Code
db.sql('select __import__("os").popen("pwd").read() from inventory')
###Output
_____no_output_____
###Markdown
The above statement effectively reads from the user's file system. Instead of `os.popen("pwd").read()`, it could execute arbitrary Python commands – to access data, install software, run a background process. This is where "the full power of Python expressions" turns back on us. What we want is to allow our _program_ to make full use of its power; yet, the _user_ (or any third party) should not be entrusted to do the same. Hence, we need to differentiate between (trusted) _input from the program_ and (untrusted) _input from the user_. One method that allows such differentiation is that of *dynamic taint analysis*. The idea is to identify the functions that accept user input as *sources* that *taint* any string that comes in through them, and those functions that perform dangerous operations as *sinks*. Finally, we bless certain functions as *taint sanitizers*. The idea is that an input from the source should never reach the sink without undergoing sanitization first. This allows us to use a stronger oracle than simply checking for crashes. Tracking String TaintsThere are various levels of taint tracking that one can perform. The simplest is to track that a string fragment originated in a specific environment, and has not undergone a taint removal process. For this, we simply need to wrap the original string with an environment identifier (the _taint_) with `tstr`, and produce `tstr` instances on each operation that results in another string fragment. The attribute `taint` holds a label identifying the environment from which this instance was derived. A Class for Tainted StringsFor capturing information flows, we need a new string class. The idea is to use the new tainted string class `tstr` as a wrapper on the original `str` class. However, `str` is an *immutable* class. Hence, it does not call its `__init__()` method after being constructed. This means that any subclasses of `str` also will not get the `__init__()` method called. If we want to get our initialization routine called, we need to [hook into `__new__()`](https://docs.python.org/3/reference/datamodel.html#basic-customization) and return an instance of our own class. We combine this with our initialization code in `__init__()`.
###Code
class tstr(str):
def __new__(cls, value, *args, **kw):
return str.__new__(cls, value)
def __init__(self, value, taint=None, **kwargs):
self.taint = taint
class tstr(tstr):
def __repr__(self):
return tstr(str.__repr__(self), taint=self.taint)
class tstr(tstr):
def __str__(self):
return str.__str__(self)
###Output
_____no_output_____
###Markdown
For example, if we wrap `"hello"` in `tstr`, then we should be able to access its taint:
###Code
thello = tstr('hello', taint='LOW')
thello.taint
repr(thello).taint
###Output
_____no_output_____
###Markdown
By default, when we wrap a string, it is tainted. Hence, we also need a way to clear the taint in the string. One way is to simply return a `str` instance as above. However, one may sometimes wish to remove the taint from an existing instance. This is accomplished with `clear_taint()`, which simply sets the taint to `None`. It comes with a companion method `has_taint()`, which checks whether a `tstr` instance is currently tainted.
###Code
class tstr(tstr):
def clear_taint(self):
self.taint = None
return self
def has_taint(self):
return self.taint is not None
###Output
_____no_output_____
###Markdown
String OperatorsTo propagate the taint, we have to extend string functions, such as operators. We can do so in one single big step, overloading all string methods and operators. When we create a new string from an existing tainted string, we propagate its taint.
###Code
class tstr(tstr):
def create(self, s):
return tstr(s, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_str_wrapper()` function creates a wrapper around an existing string method which attaches the taint to the result of the method:
###Code
def make_str_wrapper(fun):
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
return self.create(res)
return proxy
###Output
_____no_output_____
###Markdown
We do this for all string methods that return a string:
###Code
def informationflow_init_1():
for name in ['__format__', '__mod__', '__rmod__', '__getitem__', '__add__', '__mul__', '__rmul__',
'capitalize', 'casefold', 'center', 'encode',
'expandtabs', 'format', 'format_map', 'join', 'ljust', 'lower', 'lstrip', 'replace',
'rjust', 'rstrip', 'strip', 'swapcase', 'title', 'translate', 'upper']:
fun = getattr(str, name)
setattr(tstr, name, make_str_wrapper(fun))
informationflow_init_1()
INITIALIZER_LIST = [informationflow_init_1]
def initialize():
for fn in INITIALIZER_LIST:
fn()
###Output
_____no_output_____
###Markdown
The one missing operator is `+` with a regular string on the left side and a tainted string on the right side. Python supports a `__radd__()` method which is invoked if the associated object is used on the right side of an addition.
###Code
class tstr(tstr):
def __radd__(self, s):
return self.create(s + str(self))
###Output
_____no_output_____
###Markdown
With this, we are already done. Let us create a string `thello` with a taint `LOW`.
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
Now, any substring will also be tainted:
###Code
thello[0].taint
thello[1:3].taint
###Output
_____no_output_____
###Markdown
String additions will return a `tstr` object with the taint:
###Code
(tstr('foo', taint='HIGH') + 'bar').taint
###Output
_____no_output_____
###Markdown
Our `__radd__()` method ensures this also works if the `tstr` occurs on the right side of a string addition:
###Code
('foo' + tstr('bar', taint='HIGH')).taint
thello += ', world'
thello.taint
###Output
_____no_output_____
###Markdown
Other operators such as multiplication also work:
###Code
(thello * 5).taint
('hw %s' % thello).taint
(tstr('hello %s', taint='HIGH') % 'world').taint
import string
###Output
_____no_output_____
###Markdown
Tracking Untrusted InputSo, what can one do with tainted strings? We reconsider the `DB` example. We define a "better" `TrustedDB` which only accepts strings tainted as `"TRUSTED"`.
###Code
class TrustedDB(DB):
def sql(self, s):
assert isinstance(s, tstr), "Need a tainted string"
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
return super().sql(s)
###Output
_____no_output_____
###Markdown
Feeding a string with an "unknown" (i.e., non-existing) trust level will cause `TrustedDB` to fail:
###Code
bdb = TrustedDB(db.db)
from ExpectError import ExpectError
with ExpectError():
bdb.sql("select year from INVENTORY")
###Output
Traceback (most recent call last):
File "<ipython-input-82-65a521f9999f>", line 2, in <module>
bdb.sql("select year from INVENTORY")
File "<ipython-input-79-53a654b6cc10>", line 3, in sql
assert isinstance(s, tstr), "Need a tainted string"
AssertionError: Need a tainted string (expected)
###Markdown
Additionally, any user input would originally be tagged with `"UNTRUSTED"` as taint. If we place an untrusted string into our better database, it will also fail:
###Code
bad_user_input = tstr('__import__("os").popen("ls").read()', taint='UNTRUSTED')
with ExpectError():
bdb.sql(bad_user_input)
###Output
Traceback (most recent call last):
File "<ipython-input-83-82c5b2d628ed>", line 3, in <module>
bdb.sql(bad_user_input)
File "<ipython-input-79-53a654b6cc10>", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
Hence, somewhere along the computation, we have to turn the "untrusted" inputs into "trusted" strings. This process is called *sanitization*. A simple sanitization function for our purposes could ensure that the input consists only of a few allowed characters (letters, digits, and some punctuation, but no quotes); if this is the case, then the input gets a new `"TRUSTED"` taint. If not, we turn the string into an (untrusted) empty string; other alternatives would be to raise an error or to escape or delete "untrusted" characters.
###Code
import re
def sanitize(user_input):
assert isinstance(user_input, tstr)
if re.match(
r'^select +[-a-zA-Z0-9_, ()]+ from +[-a-zA-Z0-9_, ()]+$', user_input):
return tstr(user_input, taint='TRUSTED')
else:
return tstr('', taint='UNTRUSTED')
good_user_input = tstr("select year,model from inventory", taint='UNTRUSTED')
sanitized_input = sanitize(good_user_input)
sanitized_input
sanitized_input.taint
bdb.sql(sanitized_input)
###Output
_____no_output_____
###Markdown
Let us now try out our untrusted input:
###Code
sanitized_input = sanitize(bad_user_input)
sanitized_input
sanitized_input.taint
with ExpectError():
bdb.sql(sanitized_input)
###Output
Traceback (most recent call last):
File "<ipython-input-91-e59f9e5c9d30>", line 2, in <module>
bdb.sql(sanitized_input)
File "<ipython-input-79-53a654b6cc10>", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
In a similar fashion, we can prevent SQL and code injections discussed in [the chapter on Web fuzzing](WebFuzzer.ipynb). Taint Aware FuzzingWe can also use tainting to _direct fuzzing to those grammar rules that are likely to generate dangerous inputs._ The idea here is to identify inputs generated by our fuzzer that lead to untrusted execution. First we define the exception to be thrown when a tainted value reaches a dangerous operation.
###Code
class Tainted(Exception):
def __init__(self, v):
self.v = v
def __str__(self):
return 'Tainted[%s]' % self.v
###Output
_____no_output_____
###Markdown
TaintedDBNext, since `my_eval()` is the most dangerous operation in the `DB` class, we define a new class `TaintedDB` that overrides `my_eval()` to raise an exception whenever an untrusted string reaches it.
###Code
class TaintedDB(DB):
def my_eval(self, statement, g, l):
if statement.taint != 'TRUSTED':
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
We initialize an instance of `TaintedDB`
###Code
tdb = TaintedDB()
tdb.db = db.db
###Output
_____no_output_____
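###Markdown
Before fuzzing, a quick sanity check (a sketch, not in the original notebook): a query tainted `'TRUSTED'` passes through `my_eval()`, while any other taint raises `Tainted`.
###Code
tdb.sql(tstr('select year from inventory', taint='TRUSTED'))
try:
    tdb.sql(tstr('select year from inventory', taint='UNTRUSTED'))
except Tainted as e:
    tainted_error = e  # the untrusted expression raises Tainted inside my_eval()
tainted_error
###Output
_____no_output_____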
###Markdown
Then we start fuzzing.
###Code
import traceback
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = tdb.sql(tstr(query, taint='UNTRUSTED'))
print(repr(res))
except SQLException as e:
pass
except Tainted as e:
print("> ", e)
except:
traceback.print_exc()
break
print()
###Output
'delete from inventory where y/u-l+f/y<Y(c)/A-H*q'
> Tainted[y/u-l+f/y<Y(c)/A-H*q]
"insert into inventory (G,Wmp,sl3hku3) values ('<','?')"
"insert into inventory (d0) values (',_G')"
'select P*Q-w/x from inventory where X<j==:==j*r-f'
> Tainted[(X<j==:==j*r-f)]
'select a>F*i from inventory where Q/I-_+P*j>.'
> Tainted[(Q/I-_+P*j>.)]
'select (V-i<T/g) from inventory where T/r/G<FK(m)/(i)'
> Tainted[(T/r/G<FK(m)/(i))]
'select (((i))),_(S,_)/L-k<H(Sv,R,n,W,Y) from inventory'
> Tainted[((((i))),_(S,_)/L-k<H(Sv,R,n,W,Y))]
'select (N==c*U/P/y),i-e/n*y,T!=w,u from inventory'
> Tainted[((N==c*U/P/y),i-e/n*y,T!=w,u)]
'update inventory set _=B,n=v where o-p*k-J>T'
'select s from inventory where w4g4<.m(_)/_>t'
> Tainted[(w4g4<.m(_)/_>t)]
###Markdown
One can see that `insert`, `update`, `select` and `delete` statements on an existing table lead to taint exceptions. We can now focus on these specific kinds of inputs. However, this is not the only thing we can do. We will see how we can identify specific portions of input that reached tainted execution using character origins in the later sections. But before that, we explore other uses of taints. Preventing Privacy LeaksUsing taints, we can also ensure that secret information does not leak out. We can assign a special taint `"SECRET"` to strings whose information must not leak out:
###Code
secrets = tstr('<Plenty of secret keys>', taint='SECRET')
###Output
_____no_output_____
###Markdown
Accessing any substring of `secrets` will propagate the taint:
###Code
secrets[1:3].taint
###Output
_____no_output_____
###Markdown
Consider the _heartbeat_ security leak from [the chapter on Fuzzing](Fuzzer.ipynb), in which a server would accidentally send back not only the user input sent to it, but also secret memory. If the reply consists only of the user input, there is no taint associated with it:
###Code
user_input = "hello"
reply = user_input
isinstance(reply, tstr)
###Output
_____no_output_____
###Markdown
If, however, the reply contains _any_ part of the secret, the reply will be tainted:
###Code
reply = user_input + secrets[0:5]
reply
reply.taint
###Output
_____no_output_____
###Markdown
The output function of our server would now ensure that the data sent back does not contain any secret information:
###Code
def send_back(s):
assert not isinstance(s, tstr) and not s.taint == 'SECRET'
...
with ExpectError():
send_back(reply)
###Output
Traceback (most recent call last):
File "<ipython-input-106-e02d8e55c3ba>", line 2, in <module>
send_back(reply)
File "<ipython-input-105-a105f7cd1cab>", line 2, in send_back
assert not isinstance(s, tstr) and not s.taint == 'SECRET'
AssertionError (expected)
###Markdown
Tracking Character OriginsOur `tstr` solution can help to identify information leaks – but it is by no means complete. If we actually take the `heartbeat()` implementation from [the chapter on Fuzzing](Fuzzer.ipynb), we will see that _any_ reply is marked as `SECRET` – even replies that do not access secret memory at all:
###Code
from Fuzzer import heartbeat
reply = heartbeat('hello', 5, memory=secrets)
reply.taint
###Output
_____no_output_____
###Markdown
Why is this? If we look into the implementation of `heartbeat()`, we will see that it first builds a long string `memory` from the (non-secret) reply and the (secret) memory, before returning the first characters from `memory`.```python Store reply in memory memory = reply + memory[len(reply):]```At this point, the whole memory still is tainted as `SECRET`, _including_ the non-secret part from `reply`. We may be able to circumvent the issue by tagging the `reply` as `PUBLIC` – but then, this taint would be in conflict with the `SECRET` tag of `memory`. What happens if we compose a string from two differently tainted strings?
###Code
thilo = tstr("High", taint='HIGH') + tstr("Low", taint='LOW')
###Output
_____no_output_____
###Markdown
It turns out that in this case, the `__add__()` method takes precedence over the `__radd__()` method, which means that the right-hand `"Low"` string is treated as a regular (non-tainted) string.
###Code
thilo
thilo.taint
###Output
_____no_output_____
###Markdown
We could set up the `__add__()` and other methods with special handling for conflicting taints. However, the way this conflict should be resolved would be highly _application-dependent_:* If we use taints to indicate _privacy levels_, `SECRET` privacy should take precedence over `PUBLIC` privacy. Any combination of a `SECRET`-tainted string and a `PUBLIC`-tainted string thus should have a `SECRET` taint.* If we use taints to indicate _origins_ of information, an `UNTRUSTED` origin should take precedence over a `TRUSTED` origin. Any combination of an `UNTRUSTED`-tainted string and a `TRUSTED`-tainted string thus should have an `UNTRUSTED` taint.Of course, such conflict resolutions can be implemented. But even so, they will not help us in the `heartbeat()` example differentiating secret from non-secret output data. Tracking Individual CharactersFortunately, there is a better, more generic way to solve the above problems. The key to composition of differently tainted strings is to assign taints not only to strings, but actually to every bit of information – in our case, characters. If every character has a taint on its own, a new composition of characters will simply inherit this very taint _per character_. To this end, we introduce a second bit of information named _origin_. Distinguishing various untrusted sources may be accomplished by origining each instance as separate instance (called *colors* in dynamic origin research). You will see an instance of this technique in the chapter on [Grammar Mining](GrammarMiner.ipynb). In this section, we carry *character level* origins. That is, given a fragment that resulted from a portion of the original origined string, one will be able to tell which portion of the input string the fragment was taken from. In essence, each input character index from an origined source gets its own color. More complex origining such as *bitmap origins* are possible where a single character may result from multiple origined character indexes (such as *checksum* operations on strings). We do not consider these in this chapter. A Class for Tracking Character OriginsLet us introduce a class `ostr` which, like `tstr`, carries a taint for each string, and additionally an _origin_ for each character that indicates its source. It is a consecutive number in a particular range (by default, starting with zero) indicating its _position_ within a specific origin.
###Code
class ostr(str):
DEFAULT_ORIGIN = 0
def __new__(cls, value, *args, **kw):
return str.__new__(cls, value)
def __init__(self, value, taint=None, origin=None, **kwargs):
self.taint = taint
if origin is None:
origin = ostr.DEFAULT_ORIGIN
if isinstance(origin, int):
self.origin = list(range(origin, origin + len(self)))
else:
self.origin = origin
assert len(self.origin) == len(self)
class ostr(ostr):
def create(self, s):
return ostr(s, taint=self.taint, origin=self.origin)
class ostr(ostr):
UNKNOWN_ORIGIN = -1
def __repr__(self):
# handle escaped chars
origin = [ostr.UNKNOWN_ORIGIN]
for s, o in zip(str(self), self.origin):
origin.extend([o] * (len(repr(s)) - 2))
origin.append(ostr.UNKNOWN_ORIGIN)
return ostr(str.__repr__(self), taint=self.taint, origin=origin)
class ostr(ostr):
def __str__(self):
return str.__str__(self)
###Output
_____no_output_____
###Markdown
By default, character origins start with `0`:
###Code
thello = ostr('hello')
assert thello.origin == [0, 1, 2, 3, 4]
###Output
_____no_output_____
###Markdown
We can also specify the starting origin explicitly; below, the origins are `6..10`:
###Code
tworld = ostr('world', origin=6)
assert tworld.origin == [6, 7, 8, 9, 10]
a = ostr("hello\tworld")
repr(a).origin
###Output
_____no_output_____
###Markdown
`str()` returns a `str` instance without origin or taint information:
###Code
assert type(str(thello)) == str
###Output
_____no_output_____
###Markdown
`repr()`, however, keeps the origin information for the original string:
###Code
repr(thello)
repr(thello).origin
###Output
_____no_output_____
###Markdown
Just as with taints, we can clear origins and check whether an origin is present:
###Code
class ostr(ostr):
def clear_taint(self):
self.taint = None
return self
def has_taint(self):
return self.taint is not None
class ostr(ostr):
def clear_origin(self):
self.origin = [self.UNKNOWN_ORIGIN] * len(self)
return self
def has_origin(self):
return any(origin != self.UNKNOWN_ORIGIN for origin in self.origin)
thello = ostr('Hello')
assert thello.has_origin()
thello.clear_origin()
assert not thello.has_origin()
###Output
_____no_output_____
###Markdown
In the remainder of this section, we re-implement various string methods such that they also keep track of origins. If this is too tedious for you, jump right [to the next section](#Checking-Origins), which gives a number of usage examples. CreateWe need to create new substrings that are wrapped in `ostr` objects. However, we also want to allow our subclasses to create their own instances. Hence, we again provide a `create()` method that produces a new `ostr` instance.
###Code
class ostr(ostr):
def create(self, res, origin=None):
return ostr(res, taint=self.taint, origin=origin)
thello = ostr('hello', taint='HIGH')
tworld = thello.create('world', origin=6)
tworld.origin
tworld.taint
assert (thello.origin, tworld.origin) == (
[0, 1, 2, 3, 4], [6, 7, 8, 9, 10])
###Output
_____no_output_____
###Markdown
IndexIn Python, indexing is provided through `__getitem__()`. Indexing on positive integers is simple enough. However, it has two additional wrinkles. The first is that, if the index is negative, it is counted backwards from the end of the string; that is, the last character has the negative index `-1`.
###Code
class ostr(ostr):
def __getitem__(self, key):
res = super().__getitem__(key)
if isinstance(key, int):
key = len(self) + key if key < 0 else key
return self.create(res, [self.origin[key]])
elif isinstance(key, slice):
return self.create(res, self.origin[key])
else:
assert False
hello = ostr('hello', taint='HIGH')
assert (hello[0], hello[-1]) == ('h', 'o')
hello[0].taint
###Output
_____no_output_____
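###Markdown
A quick check of the negative-index handling described above (a sketch using the `hello` instance just defined):
###Code
hello[-1].origin   # [4]: 'o' maps back to position 4 of the original string
hello[-2:].origin  # [3, 4]
###Output
_____no_output_____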
###Markdown
The other wrinkle is that `__getitem__()` can accept a slice. We discuss this next. SlicesThe Python `slice` operator `[n:m]` relies on the object being an `iterator`. Hence, we define the `__iter__()` method, which returns a custom `iterator`.
###Code
class ostr(ostr):
def __iter__(self):
return ostr_iterator(self)
###Output
_____no_output_____
###Markdown
The `__iter__()` method requires a supporting `iterator` object. The `iterator` is used to save the state of the current iteration, which it does by keeping a reference to the original `ostr`, and the current index of iteration `_str_idx`.
###Code
class ostr_iterator():
def __init__(self, ostr):
self._ostr = ostr
self._str_idx = 0
def __next__(self):
if self._str_idx == len(self._ostr):
raise StopIteration
# calls ostr getitem should be ostr
c = self._ostr[self._str_idx]
assert isinstance(c, ostr)
self._str_idx += 1
return c
###Output
_____no_output_____
###Markdown
Bringing all these together:
###Code
thw = ostr('hello world', taint='HIGH')
thw[0:5]
assert thw[0:5].has_taint()
assert thw[0:5].has_origin()
thw[0:5].taint
thw[0:5].origin
###Output
_____no_output_____
###Markdown
Splits
###Code
def make_split_wrapper(fun):
def proxy(self, *args, **kwargs):
lst = fun(self, *args, **kwargs)
return [self.create(elem) for elem in lst]
return proxy
for name in ['split', 'rsplit', 'splitlines']:
fun = getattr(str, name)
setattr(ostr, name, make_split_wrapper(fun))
thello = ostr('hello world', taint='LOW')
thello == 'hello world'
thello.split()[0].taint
###Output
_____no_output_____
###Markdown
(Exercise for the reader: handle _partitions_, i.e., splitting a string by substrings) ConcatenationIf two origined strings are concatenated together, it may be desirable to transfer the origins from each to the corresponding portion of the resulting string. The concatenation of strings is accomplished by overriding `__add__()`.
###Code
class ostr(ostr):
def __add__(self, other):
if isinstance(other, ostr):
return self.create(str.__add__(self, other),
(self.origin + other.origin))
else:
return self.create(str.__add__(self, other),
(self.origin + [self.UNKNOWN_ORIGIN for i in other]))
###Output
_____no_output_____
###Markdown
Testing concatenations between two `ostr` instances:
###Code
thello = ostr("hello")
tworld = ostr("world", origin=6)
thw = thello + tworld
assert thw.origin == [0, 1, 2, 3, 4, 6, 7, 8, 9, 10]
###Output
_____no_output_____
###Markdown
What if an `ostr` is concatenated with a `str`?
###Code
space = " "
th_w = thello + space + tworld
assert th_w.origin == [
0,
1,
2,
3,
4,
ostr.UNKNOWN_ORIGIN,
ostr.UNKNOWN_ORIGIN,
6,
7,
8,
9,
10]
###Output
_____no_output_____
###Markdown
One wrinkle here is that when adding an `ostr` and a `str`, the user may place the `str` first, in which case the `__add__()` method will be called on the `str` instance, not on the `ostr` instance. However, Python provides a solution: if one defines `__radd__()` on the `ostr` instance, that method will be called rather than `str.__add__()`.
###Code
class ostr(ostr):
def __radd__(self, other):
origin = other.origin if isinstance(other, ostr) else [
self.UNKNOWN_ORIGIN for i in other]
return self.create(str.__add__(other, self), (origin + self.origin))
###Output
_____no_output_____
###Markdown
We test it out:
###Code
shello = "hello"
tworld = ostr("world")
thw = shello + tworld
assert thw.origin == [ostr.UNKNOWN_ORIGIN] * len(shello) + [0, 1, 2, 3, 4]
###Output
_____no_output_____
###Markdown
These two methods, slicing and concatenation, are sufficient to implement the other string methods that result in a string and do not change the characters underneath (i.e., no case change). Hence, we look at a helper method next. Extract Origin StringGiven a specific input index, the method `x()` extracts the corresponding origined portion from an `ostr`. As a convenience, it supports slices along with ints.
###Code
class ostr(ostr):
class TaintException(Exception):
pass
def x(self, i=0):
if not self.origin:
            raise self.TaintException('Invalid request idx')
if isinstance(i, int):
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j == i]]
elif isinstance(i, slice):
r = range(i.start or 0, i.stop or len(self), i.step or 1)
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j in r]]
thw = ostr('hello world', origin=100)
assert thw.x(101) == ['e']
assert thw.x(slice(101, 105)) == ['e', 'l', 'l', 'o']
###Output
_____no_output_____
###Markdown
Replace The `replace()` method replaces a portion of the string with another.
###Code
class ostr(ostr):
def replace(self, a, b, n=None):
old_origin = self.origin
b_origin = b.origin if isinstance(
b, ostr) else [self.UNKNOWN_ORIGIN] * len(b)
mystr = str(self)
i = 0
while True:
if n and i >= n:
break
idx = mystr.find(a)
if idx == -1:
break
last = idx + len(a)
mystr = mystr.replace(a, b, 1)
partA, partB = old_origin[0:idx], old_origin[last:]
old_origin = partA + b_origin + partB
i += 1
return self.create(mystr, old_origin)
my_str = ostr("aa cde aa")
res = my_str.replace('aa', 'bb')
assert (res, res.origin) == ('bb cde bb',
                             [ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN,
                              2, 3, 4, 5, 6,
                              ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN])
my_str = ostr("aa cde aa")
res = my_str.replace('aa', ostr('bb', origin=100))
assert (
res, res.origin) == (
('bb cde bb'), [
100, 101, 2, 3, 4, 5, 6, 100, 101])
###Output
_____no_output_____
###Markdown
Split We essentially have to re-implement the split operations; splitting by whitespace is slightly different from splitting by an explicit separator.
###Code
class ostr(ostr):
def _split_helper(self, sep, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = len(sep)
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
first_idx = last_idx + sep_len
return result_list
def _split_space(self, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = 0
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
v = str(self[last_idx:])
sep_len = len(v) - len(v.lstrip(' '))
first_idx = last_idx + sep_len
return result_list
def rsplit(self, sep=None, maxsplit=-1):
splitted = super().rsplit(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
def split(self, sep=None, maxsplit=-1):
splitted = super().split(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
my_str = ostr('ab cdef ghij kl')
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([0, 1], [3, 4, 5, 6], [8, 9, 10, 11], [13, 14])
my_str = ostr('ab cdef ghij kl', origin=list(range(0, 15)))
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert(ab.origin, cdef.origin, kl.origin) == ([0, 1], [3, 4, 5, 6], [13, 14])
my_str = ostr('ab cdef ghij kl', origin=100, taint='HIGH')
ab, cdef, ghij, kl = my_str.rsplit()
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([100, 101], [105, 106, 107, 108], [110, 111, 112, 113],
[118, 119])
my_str = ostr('ab cdef ghij kl', origin=list(range(0, 20)), taint='HIGH')
ab, cdef, ghij, kl = my_str.split()
assert (ab.origin, cdef.origin, kl.origin) == ([0, 1], [5, 6, 7, 8], [18, 19])
assert ab.taint == 'HIGH'
###Output
_____no_output_____
###Markdown
Strip
###Code
class ostr(ostr):
def strip(self, cl=None):
return self.lstrip(cl).rstrip(cl)
def lstrip(self, cl=None):
res = super().lstrip(cl)
i = self.find(res)
return self[i:]
def rstrip(self, cl=None):
res = super().rstrip(cl)
return self[0:len(res)]
my_str1 = ostr(" abc ")
v = my_str1.strip()
assert (v, v.origin) == ('abc', [2, 3, 4])
my_str1 = ostr(" abc ")
v = my_str1.lstrip()
assert (v, v.origin) == ('abc ', [2, 3, 4, 5, 6])
my_str1 = ostr(" abc ")
v = my_str1.rstrip()
assert (v, v.origin) == (' abc', [0, 1, 2, 3, 4])
###Output
_____no_output_____
###Markdown
Expand Tabs
###Code
class ostr(ostr):
def expandtabs(self, n=8):
parts = self.split('\t')
res = super().expandtabs(n)
all_parts = []
for i, p in enumerate(parts):
all_parts.extend(p.origin)
if i < len(parts) - 1:
l = len(all_parts) % n
all_parts.extend([p.origin[-1]] * l)
return self.create(res, all_parts)
my_str = str("ab\tcd")
my_ostr = ostr("ab\tcd")
v1 = my_str.expandtabs(4)
v2 = my_ostr.expandtabs(4)
assert str(v1) == str(v2)
assert (len(v1), repr(v2), v2.origin) == (6, "'ab cd'", [0, 1, 1, 1, 3, 4])
class ostr(ostr):
def join(self, iterable):
mystr = ''
myorigin = []
sep_origin = self.origin
lst = list(iterable)
for i, s in enumerate(lst):
sorigin = s.origin if isinstance(s, ostr) else [
self.UNKNOWN_ORIGIN] * len(s)
myorigin.extend(sorigin)
mystr += str(s)
if i < len(lst) - 1:
myorigin.extend(sep_origin)
mystr += str(self)
res = super().join(iterable)
assert len(res) == len(mystr)
return self.create(res, myorigin)
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104])
v4 = ostr('').join([v2, v3, v1])
assert (
v4, v4.origin) == (
'cdefab', [
103, 104, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 100, 101])
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104])
v4 = ostr(',').join([v2, v3, v1])
assert (v4, v4.origin) == ('cd,ef,ab',
[103, 104, 0, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 0, 100, 101])
###Output
_____no_output_____
###Markdown
Partitions
###Code
class ostr(ostr):
def partition(self, sep):
partA, sep, partB = super().partition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
def rpartition(self, sep):
partA, sep, partB = super().rpartition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
###Output
_____no_output_____
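###Markdown
Here is a quick sketch (not in the original text) of how `partition()` distributes the origins across the three parts:
###Code
phw = ostr('hello world', origin=100)
part_a, part_sep, part_b = phw.partition(' ')
(part_a.origin, part_sep.origin, part_b.origin)
# ([100, 101, 102, 103, 104], [105], [106, 107, 108, 109, 110])
###Output
_____no_output_____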
###Markdown
Justify
###Code
class ostr(ostr):
def ljust(self, width, fillchar=' '):
res = super().ljust(width, fillchar)
initial = len(res) - len(self)
if isinstance(fillchar, tstr):
t = fillchar.x()
else:
t = self.UNKNOWN_ORIGIN
        return self.create(res, self.origin + [t] * initial)  # ljust() pads on the right
class ostr(ostr):
def rjust(self, width, fillchar=' '):
res = super().rjust(width, fillchar)
final = len(res) - len(self)
if isinstance(fillchar, tstr):
t = fillchar.x()
else:
t = self.UNKNOWN_ORIGIN
        return self.create(res, [t] * final + self.origin)  # rjust() pads on the left
###Output
_____no_output_____
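###Markdown
A small sketch of the padding behavior (assuming the corrected origin ordering above): padding characters receive `UNKNOWN_ORIGIN`.
###Code
ojust = ostr('abc', origin=10)
ojust.rjust(5).origin  # [-1, -1, 10, 11, 12]: padding is on the left
ojust.ljust(5).origin  # [10, 11, 12, -1, -1]: padding is on the right
###Output
_____no_output_____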
###Markdown
mod
###Code
class ostr(ostr):
def __mod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
s_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = self.find('%s')
assert i >= 0
res = super().__mod__(s)
r_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
class ostr(ostr):
def __rmod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
r_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = s.find('%s')
assert i >= 0
res = super().__rmod__(s)
s_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
a = ostr('hello %s world', origin=100)
a
(a % 'good').origin
b = 'hello %s world'
c = ostr('bad', origin=10)
(b % c).origin
###Output
_____no_output_____
###Markdown
String methods that do not change origin
###Code
class ostr(ostr):
def swapcase(self):
return self.create(str(self).swapcase(), self.origin)
def upper(self):
return self.create(str(self).upper(), self.origin)
def lower(self):
return self.create(str(self).lower(), self.origin)
def capitalize(self):
return self.create(str(self).capitalize(), self.origin)
def title(self):
return self.create(str(self).title(), self.origin)
a = ostr('aa', origin=100).upper()
a, a.origin
###Output
_____no_output_____
###Markdown
General wrappers These are not strictly needed for operation, but can be useful for tracing.
###Code
def make_str_wrapper(fun):
def proxy(*args, **kwargs):
res = fun(*args, **kwargs)
return res
return proxy
import inspect
import types
def informationflow_init_2():
ostr_members = [name for name, fn in inspect.getmembers(ostr, callable)
if isinstance(fn, types.FunctionType) and fn.__qualname__.startswith('ostr')]
for name, fn in inspect.getmembers(str, callable):
if name not in set(['__class__', '__new__', '__str__', '__init__',
'__repr__', '__getattribute__']) | set(ostr_members):
setattr(ostr, name, make_str_wrapper(fn))
informationflow_init_2()
INITIALIZER_LIST.append(informationflow_init_2)
###Output
_____no_output_____
###Markdown
Methods yet to be translated These methods generate strings from other strings. However, we do not have the right implementations for any of these. Hence these are marked as dangerous until we can generate the right translations.
###Code
def make_str_abort_wrapper(fun):
def proxy(*args, **kwargs):
raise ostr.TaintException(
'%s Not implemented in `ostr`' %
fun.__name__)
return proxy
def informationflow_init_3():
for name, fn in inspect.getmembers(str, callable):
# Omitted 'splitlines' as this is needed for formatting output in
# IPython/Jupyter
if name in ['__format__', 'format_map', 'format',
'__mul__', '__rmul__', 'center', 'zfill', 'decode', 'encode']:
setattr(ostr, name, make_str_abort_wrapper(fn))
informationflow_init_3()
INITIALIZER_LIST.append(informationflow_init_3)
###Output
_____no_output_____
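###Markdown
A quick sketch (not in the original flow): calling one of these not-yet-translated methods now fails loudly instead of silently losing origin information.
###Code
try:
    ostr('hello').center(10)
except ostr.TaintException as e:
    center_error = e  # "center Not implemented in `ostr`"
center_error
###Output
_____no_output_____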
###Markdown
While generating proxy wrappers for string operations can handle most common cases of transmission of information flow, some of the operations involving strings can not be overridden. For example, consider the following. Checking OriginsWith all this implemented, we now have full-fledged `ostr` strings where we can easily check the origin of each and every character. To check whether a string originates from another string, we can convert the origin to a set and resort to standard set operations:
###Code
s = ostr("hello", origin=100)
s[1]
s[1].origin
set(s[1].origin) <= set(s.origin)
t = ostr("world", origin=200)
set(s.origin) <= set(t.origin)
u = s + t + "!"
u.origin
ostr.UNKNOWN_ORIGIN in u.origin
###Output
_____no_output_____
###Markdown
Privacy Leaks RevisitedLet us apply our `ostr` class to see whether we can come up with a satisfactory solution for checking the `heartbeat()` function against information leakage.
###Code
SECRET_ORIGIN = 1000
###Output
_____no_output_____
###Markdown
We define a "secret" that must not leak out:
###Code
secret = ostr('<again, some super-secret input>', origin=SECRET_ORIGIN)
###Output
_____no_output_____
###Markdown
Each and every character in `secret` has an origin starting with `SECRET_ORIGIN`:
###Code
print(secret.origin)
###Output
[1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031]
###Markdown
If we now invoke `heartbeat()` with a given string, the origin of the reply should all be `UNKNOWN_ORIGIN` (from the input), and none of the characters should have a `SECRET_ORIGIN`.
###Code
s = heartbeat('hello', 5, memory=secret)
s
print(s.origin)
###Output
[-1, -1, -1, -1, -1]
###Markdown
We can verify that the secret did not leak out by formulating appropriate assertions:
###Code
assert s.origin == [ostr.UNKNOWN_ORIGIN] * len(s)
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in s.origin)
assert not any(origin >= SECRET_ORIGIN for origin in s.origin)
###Output
_____no_output_____
###Markdown
All assertions pass, again confirming that no secret leaked out. Let us now go and exploit `heartbeat()` to reveal its secrets. As `heartbeat()` is unchanged, it is as vulnerable as it was:
###Code
s = heartbeat('hello', 32, memory=secret)
s
###Output
_____no_output_____
###Markdown
Now, however, the reply _does_ contain secret information:
###Code
print(s.origin)
with ExpectError():
assert s.origin == [ostr.UNKNOWN_ORIGIN] * len(s)
with ExpectError():
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in s.origin)
with ExpectError():
assert not any(origin >= SECRET_ORIGIN for origin in s.origin)
###Output
Traceback (most recent call last):
File "<ipython-input-212-9630f3080c59>", line 2, in <module>
assert not any(origin >= SECRET_ORIGIN for origin in s.origin)
AssertionError (expected)
###Markdown
We can now integrate these assertions into the `heartbeat()` function, causing it to fail before leaking information. Additionally (or alternatively), we can also rewrite our output functions not to give out any secret information. We will leave these two exercises for the reader. Taint-Directed FuzzingThe previous _Taint Aware Fuzzing_ was a bit unsatisfactory in that we could not focus on the specific parts of the grammar that led to dangerous operations. We fix that with _taint-directed fuzzing_ using `TrackingDB`. The idea here is to track the origins of each character that reaches `eval()`, trace them back to the grammar nodes that generated them, and increase the probability of using those nodes again. TrackingDBThe `TrackingDB` is similar to `TaintedDB`. The difference is that, if we find that the execution has reached `my_eval()`, we simply raise a `Tainted` exception.
###Code
class TrackingDB(TaintedDB):
def my_eval(self, statement, g, l):
if statement.origin:
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
Next, we need a specially crafted fuzzer that preserves the taints. TaintedGrammarFuzzerWe define a `TaintedGrammarFuzzer` class that ensures that the taints propagate to the derivation tree. This is similar to the `GrammarFuzzer` from the [chapter on grammar fuzzers](GrammarFuzzer.ipynb) except that the origins and taints are preserved.
###Code
import random
from Grammars import START_SYMBOL
from GrammarFuzzer import GrammarFuzzer
from Parser import canonical
class TaintedGrammarFuzzer(GrammarFuzzer):
def __init__(self,
grammar,
start_symbol=START_SYMBOL,
expansion_switch=1,
log=False):
self.tainted_start_symbol = ostr(
start_symbol, origin=[1] * len(start_symbol))
self.expansion_switch = expansion_switch
self.log = log
self.grammar = grammar
self.c_grammar = canonical(grammar)
self.init_tainted_grammar()
def expansion_cost(self, expansion, seen=set()):
symbols = [e for e in expansion if e in self.c_grammar]
if len(symbols) == 0:
return 1
if any(s in seen for s in symbols):
return float('inf')
return sum(self.symbol_cost(s, seen) for s in symbols) + 1
def fuzz_tree(self):
tree = (self.tainted_start_symbol, [])
nt_leaves = [tree]
expansion_trials = 0
while nt_leaves:
idx = random.randint(0, len(nt_leaves) - 1)
key, children = nt_leaves[idx]
expansions = self.ct_grammar[key]
if expansion_trials < self.expansion_switch:
expansion = random.choice(expansions)
else:
costs = [self.expansion_cost(e) for e in expansions]
m = min(costs)
all_min = [i for i, c in enumerate(costs) if c == m]
expansion = expansions[random.choice(all_min)]
new_leaves = [(token, []) for token in expansion]
new_nt_leaves = [e for e in new_leaves if e[0] in self.ct_grammar]
children[:] = new_leaves
nt_leaves[idx:idx + 1] = new_nt_leaves
if self.log:
print("%-40s" % (key + " -> " + str(expansion)))
expansion_trials += 1
return tree
def fuzz(self):
self.derivation_tree = self.fuzz_tree()
return self.tree_to_string(self.derivation_tree)
###Output
_____no_output_____
###Markdown
We use a specially prepared tainted grammar for fuzzing. We mark each individual definition, each individual rule, and each individual token with a separate origin (we chose a token increment of 10 here, after inspecting the grammar). This allows us to track exactly which parts of the grammar were involved in the operations we are interested in.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def init_tainted_grammar(self):
key_increment, alt_increment, token_increment = 1000, 100, 10
key_origin = key_increment
self.ct_grammar = {}
for key, val in self.c_grammar.items():
key_origin += key_increment
os = []
for v in val:
ts = []
key_origin += alt_increment
for t in v:
nt = ostr(t, origin=key_origin)
key_origin += token_increment
ts.append(nt)
os.append(ts)
self.ct_grammar[key] = os
# a use tracking grammar
self.ctp_grammar = {}
for key, val in self.ct_grammar.items():
self.ctp_grammar[key] = [(v, dict(use=0)) for v in val]
###Output
_____no_output_____
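###Markdown
To see what this origin assignment looks like (an illustrative sketch, using the `INVENTORY_GRAMMAR_F` grammar employed below), we can instantiate the fuzzer and inspect the origins attached to the tokens of one rule:
###Code
tgf_demo = TaintedGrammarFuzzer(INVENTORY_GRAMMAR_F)
# Each token in the first alternative of '<query>' carries its own origin range
[(str(token), token.origin) for token in tgf_demo.ct_grammar['<query>'][0]]
###Output
_____no_output_____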
###Markdown
As before, we initialize the `TrackingDB`
###Code
trdb = TrackingDB(db.db)
###Output
_____no_output_____
###Markdown
Finally, we need to ensure that the taints are preserved when the tree is converted back to a string. For this, we override `tree_to_string()`:
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def tree_to_string(self, tree):
symbol, children, *_ = tree
e = ostr('')
if children:
return e.join([self.tree_to_string(c) for c in children])
else:
return e if symbol in self.c_grammar else symbol
###Output
_____no_output_____
###Markdown
We define `update_grammar()`, which takes the set of origins that reached the dangerous operation together with the derivation tree of the fuzzed string, and updates the usage counts in the enhanced grammar accordingly.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def update_grammar(self, origin, dtree):
def update_tree(dtree, origin):
key, children = dtree
if children:
updated_children = [update_tree(c, origin) for c in children]
corigin = set.union(
*[o for (key, children, o) in updated_children])
corigin = corigin.union(set(key.origin))
return (key, children, corigin)
else:
my_origin = set(key.origin).intersection(origin)
return (key, [], my_origin)
key, children, oset = update_tree(dtree, set(origin))
for key, alts in self.ctp_grammar.items():
for alt, o in alts:
alt_origins = set([i for token in alt for i in token.origin])
if alt_origins.intersection(oset):
o['use'] += 1
###Output
_____no_output_____
###Markdown
With these, we are now ready to fuzz.
###Code
def tree_type(tree):
key, children = tree
return (type(key), key, [tree_type(c) for c in children])
tgf = TaintedGrammarFuzzer(INVENTORY_GRAMMAR_F)
x = None
for _ in range(10):
qtree = tgf.fuzz_tree()
query = tgf.tree_to_string(qtree)
assert isinstance(query, ostr)
try:
print(repr(query))
res = trdb.sql(query)
print(repr(res))
except SQLException as e:
print(e)
except Tainted as e:
print(e)
origin = e.args[0].origin
tgf.update_grammar(origin, qtree)
except:
traceback.print_exc()
break
print()
###Output
'select (g!=(9)!=((:)==2==9)!=J)==-7 from inventory'
Tainted[((g!=(9)!=((:)==2==9)!=J)==-7)]
'delete from inventory where ((c)==T)!=5==(8!=Y)!=-5'
Tainted[((c)==T)!=5==(8!=Y)!=-5]
'select (((w==(((X!=------8)))))) from inventory'
Tainted[((((w==(((X!=------8)))))))]
'delete from inventory where ((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))'
Tainted[((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))]
'delete from inventory where ((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))'
Tainted[((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))]
'delete from inventory where o!=2==---5==3!=t'
Tainted[o!=2==---5==3!=t]
'select (2) from inventory'
Tainted[((2))]
'select _ from inventory'
Tainted[(_)]
'select L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)) from inventory'
Tainted[(L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)))]
'delete from inventory where _==(7==(9)!=(---5)==1)==-8'
Tainted[_==(7==(9)!=(---5)==1)==-8]
###Markdown
We can now inspect our enhanced grammar to see how many times each rule was used.
###Code
tgf.ctp_grammar
###Output
_____no_output_____
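###Markdown
To make these counts easier to interpret (a sketch; the helper `rule_usage_frequencies()` is introduced here for illustration only), we can summarize them as relative usage frequencies per symbol:
###Code
def rule_usage_frequencies(ctp_grammar):
    """Return, for each symbol, its alternatives and their relative usage."""
    freqs = {}
    for symbol, alts in ctp_grammar.items():
        total = sum(o['use'] for _, o in alts)
        freqs[symbol] = [(''.join(str(token) for token in alt),
                          o['use'] / total if total else 0.0)
                         for alt, o in alts]
    return freqs

rule_usage_frequencies(tgf.ctp_grammar)
###Output
_____no_output_____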
###Markdown
From here, the idea is to focus on the rules that reached dangerous operations more often, and to increase the probability of producing values of that kind. The Limits of Taint TrackingWhile our framework can detect information leakage, it is by no means perfect. There are several ways in which taints can get lost and information thus may still leak out. ConversionsWe only track taints and origins through _strings_ and _characters_. If we convert these to numbers (or other data), the information is lost. As an example, consider this function, converting individual characters to numbers and back:
###Code
def strip_all_info(s):
t = ""
for c in s:
t += chr(ord(c))
return t
thello = ostr("Secret")
thello
thello.origin
###Output
_____no_output_____
###Markdown
The taints and origins will not propagate through the number conversion:
###Code
thello_stripped = strip_all_info(thello)
thello_stripped
with ExpectError():
thello_stripped.origin
###Output
Traceback (most recent call last):
File "<ipython-input-230-56d5157cf575>", line 2, in <module>
thello_stripped.origin
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
This issue could be addressed by extending numbers with taints and origins, just as we did for strings. At some point, however, this will still break down, because as soon as an internal C function in the Python library is reached, the taint will not propagate into and across the C function. (Unless one starts implementing dynamic taints for these, that is.) Internal C libraries As we mentioned before, calls to _internal_ C libraries do not propagate taints. For example, while the following preserves the taints,
###Code
hello = ostr('hello', origin=100)
world = ostr('world', origin=200)
(hello + ' ' + world).origin
###Output
_____no_output_____
###Markdown
a call to a `join` that should be equivalent will fail.
###Code
with ExpectError():
''.join([hello, ' ', world]).origin
###Output
Traceback (most recent call last):
File "<ipython-input-232-ad148b54cc0b>", line 2, in <module>
''.join([hello, ' ', world]).origin
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
Implicit Information FlowEven if one could taint all data in a program, there still would be means to break information flow – notably by turning explicit flow into _implicit_ flow, or data flow into _control flow_. Here is an example:
###Code
def strip_all_info_again(s):
t = ""
for c in s:
if c == 'a':
t += 'a'
elif c == 'b':
t += 'b'
elif c == 'c':
t += 'c'
...
###Output
_____no_output_____
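###Markdown
To see the effect (an illustrative sketch using a small self-contained variant, since `strip_all_info_again()` above elides some of its branches), we can rebuild a tainted string purely through comparisons; the copy equals the original, yet carries no taint:
###Code
import string

def copy_via_comparisons(s):
    # No character of `s` flows directly into the result; the copy is
    # assembled from untainted literal characters via comparisons only.
    return ''.join(lit for c in s for lit in string.ascii_lowercase if c == lit)

copied = copy_via_comparisons(tstr('secret', taint='SECRET'))
copied, hasattr(copied, 'taint')
###Output
_____no_output_____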
###Markdown
With such a function, there is no explicit data flow between the characters in `s` and the characters in `t`; yet, the strings would be identical. This problem frequently occurs in programs that process and manipulate external input. Enforcing TaintingConversions and implicit information flow are but two of several ways in which taint and origin information can get lost. To address the problem, the best solution is to _always assume the worst from untainted strings_:* When it comes to trust, an untainted string should be treated as _possibly untrusted_, and hence not relied upon unless sanitized.* When it comes to privacy, an untainted string should be treated as _possibly secret_, and hence not leaked out.As a consequence, your program should always have two kinds of taints: one for explicitly trusted (or secret) and one for explicitly untrusted (or non-secret). If a taint gets lost along the way, you may have to restore it from its sources – not unlike the string methods discussed above. The benefit is a trusted application, in which each and every information flow can be checked at runtime, with violations quickly discovered through automated tests. Synopsis This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string. For tracking information on security properties, use `tstr` as follows:
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
Now, any operation from `thello` that results in a string fragment would include the correct taint. For example:
###Code
thello[1:2].taint
###Output
_____no_output_____
###Markdown
For tracking the originating indexes from the input string, use `ostr` as follows:
###Code
ohw = ostr("hello\tworld", origin=100)
###Output
_____no_output_____
###Markdown
The originating indexes can be recovered as follows:
###Code
(ohw[0:4] +"-"+ ohw[6:]).origin
###Output
_____no_output_____
###Markdown
Lessons Learned* String-based and character-based taints allow one to dynamically track the information flow from input to the internals of a system and back to the output.* Checking taints allows one to discover untrusted inputs and information leakage at runtime.* Data conversions and implicit data flow may strip taint information; the resulting untainted strings should be treated as having the worst possible taint.* Taints can be used in conjunction with fuzzing to provide a more robust indication of incorrect behavior than simply relying on program crashes. Next StepsAn even better alternative to our taint-directed fuzzing is to make use of _symbolic_ techniques that take the semantics of the program under test into account. The chapter on [flow fuzzing](FlowFuzzer.ipynb) introduces these symbolic techniques for the purpose of exploring information flows; the subsequent chapter on [symbolic fuzzing](SymbolicFuzzer.ipynb) then shows how to make full-fledged use of symbolic execution for covering code. Similarly, [search based fuzzing](SearchBasedFuzzer.ipynb) can often provide a cheaper exploration strategy. BackgroundTaint analysis on Python using a library approach, as we implemented in this chapter, was discussed by Conti et al. \cite{Conti2010}. Exercises Exercise 1: Tainted NumbersIntroduce a class `tint` (for tainted integer) that, like `tstr`, has a taint attribute that gets passed on from `tint` to `tint`. Part 1: CreationImplement the `tint` class such that taints are set:```pythonx = tint(42, taint='SECRET')assert x.taint == 'SECRET'``` **Solution.** This is pretty straightforward, as we can apply the same scheme as for `tstr`:
###Code
class tint(int):
def __new__(cls, value, *args, **kw):
return int.__new__(cls, value)
def __init__(self, value, taint=None, **kwargs):
self.taint = taint
x = tint(42, taint='SECRET')
assert x.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 2: Arithmetic expressionsEnsure that taints get passed along arithmetic expressions; support addition, subtraction, multiplication, and division operators.```pythony = x + 1assert y.taint == 'SECRET'``` **Solution.** As with `tstr`, we implement a `create()` method and a convenience function to quickly define all arithmetic operations:
###Code
class tint(tint):
def create(self, n):
# print("New tint from", n)
return tint(n, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_int_wrapper()` function creates a wrapper around an existing `int` method which attaches the taint to the result of the method:
###Code
def make_int_wrapper(fun):
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
# print(fun, args, kwargs, "=", repr(res))
return self.create(res)
return proxy
###Output
_____no_output_____
###Markdown
We do this for all arithmetic operators:
###Code
for name in ['__add__', '__radd__', '__mul__', '__rmul__', '__sub__',
'__floordiv__', '__truediv__']:
fun = getattr(int, name)
setattr(tint, name, make_int_wrapper(fun))
x = tint(42, taint='SECRET')
y = x + 1
y.taint
###Output
_____no_output_____
###Markdown
Part 3: Passing taints from integers to stringsConverting a tainted integer into a string (using `repr()`) should yield a tainted string:```pythons = repr(x)assert s.taint == 'SECRET'``` **Solution.** We define the string conversion functions such that they return a tainted string (`tstr`):
###Code
class tint(tint):
def __repr__(self):
s = int.__repr__(self)
return tstr(s, taint=self.taint)
class tint(tint):
def __str__(self):
return tstr(int.__str__(self), taint=self.taint)
x = tint(42, taint='SECRET')
s = repr(x)
assert s.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 4: Passing taints from strings to integersConverting a tainted object (with a `taint` attribute) to an integer should pass that taint:```pythonpassword = tstr('1234', taint='NOT_EXACTLY_SECRET')x = tint(password)assert x == 1234assert x.taint == 'NOT_EXACTLY_SECRET'``` **Solution.** This can be done by having the `__init__()` constructor check for a `taint` attribute:
###Code
class tint(tint):
def __init__(self, value, taint=None, **kwargs):
if taint is not None:
self.taint = taint
else:
self.taint = getattr(value, 'taint', None)
password = tstr('1234', taint='NOT_EXACTLY_SECRET')
x = tint(password)
assert x == 1234
assert x.taint == 'NOT_EXACTLY_SECRET'
###Output
_____no_output_____
###Markdown
Tracking Information FlowWe have explored how one could generate better inputs that can penetrate deeper into the program in question. While doing so, we have relied on program crashes to tell us that we have succeeded in finding problems in the program. However, that is rather simplistic. What if the behavior of the program is simply incorrect, but does not lead to a crash? Can one do better?In this chapter, we explore in depth how to track information flows in Python, and how these flows can be used to determine whether a program behaved as expected.
###Code
from bookutils import YouTubeVideo
YouTubeVideo('WZi0dTvJ2Ug')
###Output
_____no_output_____
###Markdown
**Prerequisites*** You should have read the [chapter on coverage](Coverage.ipynb).* You should have read the [chapter on probabilistic fuzzing](ProbabilisticGrammarFuzzer.ipynb). We first set up our infrastructure so that we can make use of previously defined functions.
###Code
import bookutils
from typing import List, Any, Optional, Union
###Output
_____no_output_____
###Markdown
SynopsisTo [use the code provided in this chapter](Importing.ipynb), write```python>>> from fuzzingbook.InformationFlow import ```and then make use of the following features.This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string. Tracking String Taints`tstr` objects are replacements for Python strings that allow one to track and check _taints_ – that is, information on from where a string originated. For instance, one can mark strings that originate from third party input with a taint of "LOW", meaning that they have a low security level. The taint is passed in the constructor of a `tstr` object:```python>>> thello = tstr('hello', taint='LOW')```A `tstr` object is fully compatible with original Python strings. For instance, we can index it and access substrings:```python>>> thello[:4]'hell'```However, the `tstr` object also stores the taint, which can be accessed using the `taint` attribute:```python>>> thello.taint'LOW'```The neat thing about taints is that they propagate to all strings derived from the original tainted string.Indeed, any operation from a `tstr` string that results in a string fragment produces another `tstr` object that includes the original taint. For example:```python>>> thello[1:2].taint type: ignore'LOW'````tstr` objects duplicate most `str` methods, as indicated in the class diagram: Tracking Character Origins`ostr` objects extend `tstr` objects by not only tracking a taint, but also the originating _indexes_ from the input string. This allows you to exactly track where individual characters came from. Assume you have a long string, which at index 100 contains the password `"joshua1234"`. Then you can save this origin information using an `ostr` as follows:```python>>> secret = ostr("joshua1234", origin=100, taint='SECRET')```The `origin` attribute of an `ostr` provides access to a list of indexes:```python>>> secret.origin[100, 101, 102, 103, 104, 105, 106, 107, 108, 109]>>> secret.taint'SECRET'````ostr` objects are compatible with Python strings, except that string operations return `ostr` objects (together with the saved origin and index information). An index of `-1` indicates that the corresponding character has no origin as supplied to the `ostr()` constructor:```python>>> secret_substr = (secret[0:4] + "-" + secret[6:])>>> secret_substr.taint'SECRET'>>> secret_substr.origin[100, 101, 102, 103, -1, 106, 107, 108, 109]````ostr` objects duplicate most `str` methods, as indicated in the class diagram: A Vulnerable DatabaseSay we want to implement an *in-memory database* service in Python. Here is a rather flimsy attempt. We use the following dataset.
###Code
INVENTORY = """\
1997,van,Ford,E350
2000,car,Mercury,Cougar
1999,car,Chevy,Venture\
"""
VEHICLES = INVENTORY.split('\n')
###Output
_____no_output_____
###Markdown
Our DB is a Python class that parses its arguments and throws `SQLException` which is defined below.
###Code
class SQLException(Exception):
pass
###Output
_____no_output_____
###Markdown
The database is simply a Python `dict` that is exposed only through SQL queries.
###Code
class DB:
def __init__(self, db={}):
self.db = dict(db)
###Output
_____no_output_____
###Markdown
Representing TablesThe database contains tables, which are created by the `create_table()` method. Each table data structure is a pair of values. The first one is the metadata containing column names and types. The second value is the list of rows in the table.
###Code
class DB(DB):
def create_table(self, table, defs):
self.db[table] = (defs, [])
###Output
_____no_output_____
###Markdown
A table can be retrieved by name using the `table()` method.
###Code
class DB(DB):
def table(self, t_name):
if t_name in self.db:
return self.db[t_name]
raise SQLException('Table (%s) was not found' % repr(t_name))
###Output
_____no_output_____
###Markdown
Here is an example of how to use both. We create a table `inventory` with four columns: `year`, `kind`, `company`, and `model`. Initially, our table is empty.
###Code
def sample_db():
db = DB()
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
return db
###Output
_____no_output_____
###Markdown
Using `table()`, we can retrieve the table definition as well as its contents.
###Code
db = sample_db()
db.table('inventory')
###Output
_____no_output_____
###Markdown
We also define `column()` for retrieving the column definition from a table declaration.
###Code
class DB(DB):
def column(self, table_decl, c_name):
if c_name in table_decl:
return table_decl[c_name]
raise SQLException('Column (%s) was not found' % repr(c_name))
db = sample_db()
decl, rows = db.table('inventory')
db.column(decl, 'year')
###Output
_____no_output_____
###Markdown
Executing SQL StatementsThe `sql()` method of `DB` executes SQL statements. It inspects its arguments, and dispatches the query based on the kind of SQL statement to be executed.
###Code
class DB(DB):
def do_select(self, query):
...
def do_update(self, query):
...
def do_insert(self, query):
...
def do_delete(self, query):
...
def sql(self, query):
methods = [('select ', self.do_select),
('update ', self.do_update),
('insert into ', self.do_insert),
('delete from', self.do_delete)]
for key, method in methods:
if query.startswith(key):
return method(query[len(key):])
raise SQLException('Unknown SQL (%s)' % query)
###Output
_____no_output_____
###Markdown
Here's an example of how to use the `DB` class:
###Code
some_db = DB()
some_db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
However, at this point, the individual methods for handling SQL statements are not yet defined. Let us do this in the next steps. Excursion: Implementing SQL Statements Selecting DataThe `do_select()` method handles SQL `select` statements to retrieve data from a table.
###Code
class DB(DB):
def do_select(self, query):
FROM, WHERE = ' from ', ' where '
table_start = query.find(FROM)
if table_start < 0:
raise SQLException('no table specified')
where_start = query.find(WHERE)
select = query[:table_start]
if where_start >= 0:
t_name = query[table_start + len(FROM):where_start]
where = query[where_start + len(WHERE):]
else:
t_name = query[table_start + len(FROM):]
where = ''
_, table = self.table(t_name)
if where:
selected = self.expression_clause(table, "(%s)" % where)
selected_rows = [hm for i, data, hm in selected if data]
else:
selected_rows = table
rows = self.expression_clause(selected_rows, "(%s)" % select)
return [data for i, data, hm in rows]
###Output
_____no_output_____
###Markdown
The `expression_clause()` method is used for two purposes:1. In the form `select` $x$, $y$, $z$ `from` $t$, it _evaluates_ (and returns) the expressions $x$, $y$, $z$ in the contexts of the selected rows.2. If a clause `where` $p$ is given, it also evaluates $p$ in the context of the rows and includes the rows in the selection only if $p$ holds.To evaluate expressions like $x$, $y$, $z$ or $p$, the method `expression_clause()` makes use of the Python `eval()` evaluation function.
###Code
class DB(DB):
def expression_clause(self, table, statement):
selected = []
for i, hm in enumerate(table):
selected.append((i, self.my_eval(statement, {}, hm), hm))
return selected
###Output
_____no_output_____
###Markdown
If `eval()` fails for whatever reason, we raise an exception:
###Code
class DB(DB):
def my_eval(self, statement, g, l):
try:
return eval(statement, g, l)
except Exception:
raise SQLException('Invalid WHERE (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
**Note:** Using `eval()` here introduces some important security issues, which we will discuss later in this chapter. Here's how we can use `sql()` to issue a query. Note that the table is still empty.
###Code
db = sample_db()
db.sql('select year from inventory')
db = sample_db()
db.sql('select year from inventory where year == 2018')
###Output
_____no_output_____
###Markdown
Inserting DataThe `do_insert()` method handles SQL `insert` statements.
###Code
class DB(DB):
def do_insert(self, query):
VALUES = ' values '
table_end = query.find('(')
t_name = query[:table_end].strip()
names_end = query.find(')')
decls, table = self.table(t_name)
names = [i.strip() for i in query[table_end + 1:names_end].split(',')]
# verify columns exist
for k in names:
self.column(decls, k)
values_start = query.find(VALUES)
if values_start < 0:
raise SQLException('Invalid INSERT (%s)' % repr(query))
values = [
i.strip() for i in query[values_start + len(VALUES) + 1:-1].split(',')
]
if len(names) != len(values):
raise SQLException(
'names(%s) != values(%s)' % (repr(names), repr(values)))
# dict lookups happen in C code, so we can't use that
kvs = {}
for k,v in zip(names, values):
for key,kval in decls.items():
if k == key:
kvs[key] = self.convert(kval, v)
table.append(kvs)
###Output
_____no_output_____
###Markdown
In SQL, a column can come in any supported data type. To ensure a value is stored using the type originally declared, we need the ability to convert values to specific types, which is provided by `convert()`.
###Code
import ast
class DB(DB):
def convert(self, cast, value):
try:
return cast(ast.literal_eval(value))
except:
raise SQLException('Invalid Conversion %s(%s)' % (cast, value))
###Output
_____no_output_____
###Markdown
Here is an example of how to use the SQL `insert` command:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.table('inventory')
###Output
_____no_output_____
###Markdown
With the database filled, we can also run more complex queries:
###Code
db.sql('select year + 1, kind from inventory')
db.sql('select year, kind from inventory where year == 1997')
###Output
_____no_output_____
###Markdown
Updating DataSimilarly, `do_update()` handles SQL `update` statements.
###Code
class DB(DB):
def do_update(self, query):
SET, WHERE = ' set ', ' where '
table_end = query.find(SET)
if table_end < 0:
raise SQLException('Invalid UPDATE (%s)' % repr(query))
set_end = table_end + 5
t_name = query[:table_end]
decls, table = self.table(t_name)
names_end = query.find(WHERE)
if names_end >= 0:
names = query[set_end:names_end]
where = query[names_end + len(WHERE):]
else:
names = query[set_end:]
where = ''
sets = [[i.strip() for i in name.split('=')]
for name in names.split(',')]
# verify columns exist
for k, v in sets:
self.column(decls, k)
if where:
selected = self.expression_clause(table, "(%s)" % where)
updated = [hm for i, d, hm in selected if d]
else:
updated = table
for hm in updated:
for k, v in sets:
# we can not do dict lookups because it is implemented in C.
for key, kval in decls.items():
if key == k:
hm[key] = self.convert(kval, v)
return "%d records were updated" % len(updated)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can update things:
###Code
db.sql('update inventory set year = 1998 where year == 1997')
db.sql('select year from inventory')
db.table('inventory')
###Output
_____no_output_____
###Markdown
Deleting DataFinally, SQL `delete` statements are handled by `do_delete()`.
###Code
class DB(DB):
def do_delete(self, query):
WHERE = ' where '
table_end = query.find(WHERE)
if table_end < 0:
raise SQLException('Invalid DELETE (%s)' % query)
t_name = query[:table_end].strip()
_, table = self.table(t_name)
where = query[table_end + len(WHERE):]
selected = self.expression_clause(table, "%s" % where)
deleted = [i for i, d, hm in selected if d]
for i in sorted(deleted, reverse=True):
del table[i]
return "%d records were deleted" % len(deleted)
###Output
_____no_output_____
###Markdown
Here is an example. Let us first fill the database again with values:
###Code
db = sample_db()
db.sql('insert into inventory (year, kind, company, model) values (1997, "van", "Ford", "E350")')
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
Now we can delete data:
###Code
db.sql('delete from inventory where company == "Ford"')
###Output
_____no_output_____
###Markdown
Our database is now empty:
###Code
db.sql('select year from inventory')
###Output
_____no_output_____
###Markdown
End of Excursion Here is how our database can be used.
###Code
db = DB()
###Output
_____no_output_____
###Markdown
We first create a table in our database with the correct data types.
###Code
inventory_def = {'year': int, 'kind': str, 'company': str, 'model': str}
db.create_table('inventory', inventory_def)
###Output
_____no_output_____
###Markdown
Here is a simple convenience function to update the table using our dataset.
###Code
def update_inventory(sqldb, vehicle):
inventory_def = sqldb.db['inventory'][0]
k, v = zip(*inventory_def.items())
val = [repr(cast(val)) for cast, val in zip(v, vehicle.split(','))]
sqldb.sql('insert into inventory (%s) values (%s)' % (','.join(k),
','.join(val)))
for V in VEHICLES:
update_inventory(db, V)
###Output
_____no_output_____
###Markdown
Our database now contains the same dataset as `VEHICLES` in its `inventory` table.
###Code
db.db
###Output
_____no_output_____
###Markdown
Here is a sample select statement.
###Code
db.sql('select year,kind from inventory')
db.sql("select company,model from inventory where kind == 'car'")
###Output
_____no_output_____
###Markdown
We can run updates on it.
###Code
db.sql("update inventory set year = 1998, company = 'Suzuki' where kind == 'van'")
db.db
###Output
_____no_output_____
###Markdown
It can even do mathematics on the fly!
###Code
db.sql('select int(year)+10 from inventory')
###Output
_____no_output_____
###Markdown
Adding a new row to our table.
###Code
db.sql("insert into inventory (year, kind, company, model) values (1, 'charriot', 'Rome', 'Quadriga')")
db.db
###Output
_____no_output_____
###Markdown
Which we then delete.
###Code
db.sql("delete from inventory where year < 1900")
###Output
_____no_output_____
###Markdown
Fuzzing SQLTo verify that everything is OK, let us fuzz. First we define our grammar. Excursion: Defining a SQL grammar
###Code
import string
from Grammars import START_SYMBOL, Grammar, Expansion, \
is_valid_grammar, extend_grammar
EXPR_GRAMMAR: Grammar = {
"<start>": ["<expr>"],
"<expr>": ["<bexpr>", "<aexpr>", "(<expr>)", "<term>"],
"<bexpr>": [
"<aexpr><lt><aexpr>",
"<aexpr><gt><aexpr>",
"<expr>==<expr>",
"<expr>!=<expr>",
],
"<aexpr>": [
"<aexpr>+<aexpr>", "<aexpr>-<aexpr>", "<aexpr>*<aexpr>",
"<aexpr>/<aexpr>", "<word>(<exprs>)", "<expr>"
],
"<exprs>": ["<expr>,<exprs>", "<expr>"],
"<lt>": ["<"],
"<gt>": [">"],
"<term>": ["<number>", "<word>"],
"<number>": ["<integer>.<integer>", "<integer>", "-<number>"],
"<integer>": ["<digit><integer>", "<digit>"],
"<word>": ["<word><letter>", "<word><digit>", "<letter>"],
"<digit>":
list(string.digits),
"<letter>":
list(string.ascii_letters + '_:.')
}
assert is_valid_grammar(EXPR_GRAMMAR)
PRINTABLE_CHARS: List[str] = [i for i in string.printable
if i not in "<>'\"\t\n\r\x0b\x0c\x00"] + ['<lt>', '<gt>']
INVENTORY_GRAMMAR = extend_grammar(EXPR_GRAMMAR,
{
'<start>': ['<query>'],
'<query>': [
'select <exprs> from <table>',
'select <exprs> from <table> where <bexpr>',
'insert into <table> (<names>) values (<literals>)',
'update <table> set <assignments> where <bexpr>',
'delete from <table> where <bexpr>',
],
'<table>': ['<word>'],
'<names>': ['<column>,<names>', '<column>'],
'<column>': ['<word>'],
'<literals>': ['<literal>', '<literal>,<literals>'],
'<literal>': ['<number>', "'<chars>'"],
'<assignments>': ['<kvp>,<assignments>', '<kvp>'],
'<kvp>': ['<column>=<value>'],
'<value>': ['<word>'],
'<chars>': ['<char>', '<char><chars>'],
'<char>': PRINTABLE_CHARS, # type: ignore
})
assert is_valid_grammar(INVENTORY_GRAMMAR)
###Output
_____no_output_____
###Markdown
As can be seen from the source of our database, the functions always check whether the table name is correct. Hence, we modify the grammar to choose our particular table so that it will have a better chance of reaching deeper. We will see in the later sections how this can be done automatically.
###Code
INVENTORY_GRAMMAR_F = extend_grammar(INVENTORY_GRAMMAR,
{'<table>': ['inventory']})
###Output
_____no_output_____
###Markdown
End of Excursion
###Code
import traceback  # used to report unexpected exceptions below
from GrammarFuzzer import GrammarFuzzer
gf = GrammarFuzzer(INVENTORY_GRAMMAR_F)
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = db.sql(query)
print(repr(res))
except SQLException as e:
print("> ", e)
pass
except:
traceback.print_exc()
break
print()
###Output
'select O6fo,-977091.1,-36.46 from inventory'
> Invalid WHERE ('(O6fo,-977091.1,-36.46)')
'select g3 from inventory where -3.0!=V/g/b+Q*M*G'
> Invalid WHERE ('(-3.0!=V/g/b+Q*M*G)')
'update inventory set z=a,x=F_,Q=K where p(M)<_*S'
> Column ('z') was not found
'update inventory set R=L5pk where e*l*y-u>K+U(:)'
> Column ('R') was not found
'select _/d*Q+H/d(k)<t+M-A+P from inventory'
> Invalid WHERE ('(_/d*Q+H/d(k)<t+M-A+P)')
'select F5 from inventory'
> Invalid WHERE ('(F5)')
'update inventory set jWh.=a6 where wcY(M)>IB7(i)'
> Column ('jWh.') was not found
'update inventory set U=y where L(W<c,(U!=W))<V(((q)==m<F),O,l)'
> Column ('U') was not found
'delete from inventory where M/b-O*h*E<H-W>e(Y)-P'
> Invalid WHERE ('M/b-O*h*E<H-W>e(Y)-P')
'select ((kP(86)+b*S+J/Z/U+i(U))) from inventory'
> Invalid WHERE ('(((kP(86)+b*S+J/Z/U+i(U))))')
###Markdown
Fuzzing does not seem to have triggered any crashes. However, are crashes the only errors that we should be worried about? The Evil of EvalIn our database implementation – notably in the `expression_clause()` method – we have made use of `eval()` to evaluate expressions using the Python interpreter. This allows us to unleash the full power of Python expressions within our SQL statements.
###Code
db.sql('select year from inventory where year < 2000')
###Output
_____no_output_____
###Markdown
In the above query, the clause `year < 2000` is evaluated by `expression_clause()` as a Python expression in the context of each row; hence, `year < 2000` evaluates to either `True` or `False`. The same holds for the expressions being `select`ed:
###Code
db.sql('select year - 1900 if year < 2000 else year - 2000 from inventory')
###Output
_____no_output_____
###Markdown
This works because `year - 1900 if year < 2000 else year - 2000` is a valid Python expression. (It is not a valid SQL expression, though.) The problem with the above is that there is _no limitation_ to what the Python expression can do. What if the user tries the following?
###Code
db.sql('select __import__("os").popen("pwd").read() from inventory')
###Output
_____no_output_____
###Markdown
The above statement effectively reads from the user's file system. Instead of `os.popen("pwd").read()`, it could execute arbitrary Python commands – to access data, install software, run a background process. This is where "the full power of Python expressions" turns back on us. What we want is to allow our _program_ to make full use of its power; yet, the _user_ (or any third party) should not be entrusted to do the same. Hence, we need to differentiate between (trusted) _input from the program_ and (untrusted) _input from the user_. One method that allows such differentiation is that of *dynamic taint analysis*. The idea is to identify the functions that accept user input as *sources* that *taint* any string that comes in through them, and those functions that perform dangerous operations as *sinks*. Finally, we bless certain functions as *taint sanitizers*. The idea is that an input from the source should never reach the sink without undergoing sanitization first. This allows us to use a stronger oracle than simply checking for crashes. Tracking String TaintsThere are various levels of taint tracking that one can perform. The simplest is to track that a string fragment originated in a specific environment, and has not undergone a taint removal process. For this, we simply need to wrap the original string with an environment identifier (the _taint_) with `tstr`, and produce `tstr` instances on each operation that results in another string fragment. The attribute `taint` holds a label identifying the environment this instance was derived from. A Class for Tainted StringsFor capturing information flows, we need a new string class. The idea is to use the new tainted string class `tstr` as a wrapper on the original `str` class. However, `str` is an *immutable* class. Hence, it does not call its `__init__()` method after being constructed. This means that any subclasses of `str` also will not get the `__init__()` method called. If we want to get our initialization routine called, we need to [hook into `__new__()`](https://docs.python.org/3/reference/datamodel.html#basic-customization) and return an instance of our own class. We combine this with our initialization code in `__init__()`.
###Code
class tstr(str):
"""Wrapper for strings, saving taint information"""
def __new__(cls, value, *args, **kw):
"""Create a tstr() instance. Used internally."""
return str.__new__(cls, value)
def __init__(self, value: Any, taint: Any = None, **kwargs) -> None:
"""Constructor.
`value` is the string value the `tstr` object is to be constructed from.
`taint` is an (optional) taint to be propagated to derived strings."""
self.taint: Any = taint
class tstr(tstr):
def __repr__(self) -> tstr:
"""Return a representation."""
return tstr(str.__repr__(self), taint=self.taint)
class tstr(tstr):
def __str__(self) -> str:
"""Convert to string"""
return str.__str__(self)
###Output
_____no_output_____
###Markdown
For example, if we wrap `"hello"` in `tstr`, then we should be able to access its taint:
###Code
thello: tstr = tstr('hello', taint='LOW')
thello.taint
repr(thello).taint # type: ignore
###Output
_____no_output_____
###Markdown
By default, when we wrap a string, it is tainted. Hence, we also need a way to clear the taint in the string. One way is to simply return a `str` instance as above. However, one may sometimes wish to remove the taint from an existing instance. This is accomplished with `clear_taint()`. During `clear_taint()`, we simply set the taint to `None`. This method comes with a companion method `has_taint()`, which checks whether a `tstr` instance is currently tainted.
###Code
class tstr(tstr):
def clear_taint(self):
"""Remove taint"""
self.taint = None
return self
def has_taint(self):
"""Check if taint is present"""
return self.taint is not None
###Output
_____no_output_____
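###Markdown
As a quick illustration (a sketch), we can check for a taint and then clear it:
###Code
thello_demo = tstr('hello', taint='LOW')
thello_demo.has_taint(), thello_demo.clear_taint().has_taint()
###Output
_____no_output_____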
###Markdown
String OperatorsTo propagate the taint, we have to extend string functions, such as operators. We can do so in one single big step, overloading all string methods and operators. When we create a new string from an existing tainted string, we propagate its taint.
###Code
class tstr(tstr):
def create(self, s):
return tstr(s, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_str_wrapper()` function creates a wrapper around an existing string method which attaches the taint to the result of the method:
###Code
class tstr(tstr):
@staticmethod
def make_str_wrapper(fun):
"""Make `fun` (a `str` method) a method in `tstr`"""
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
return self.create(res)
if hasattr(fun, '__doc__'):
# Copy docstring
proxy.__doc__ = fun.__doc__
return proxy
###Output
_____no_output_____
###Markdown
We do this for all string methods that return a string:
###Code
def informationflow_init_1():
for name in ['__format__', '__mod__', '__rmod__', '__getitem__',
'__add__', '__mul__', '__rmul__',
'capitalize', 'casefold', 'center', 'encode',
'expandtabs', 'format', 'format_map', 'join',
'ljust', 'lower', 'lstrip', 'replace',
'rjust', 'rstrip', 'strip', 'swapcase', 'title', 'translate', 'upper']:
fun = getattr(str, name)
setattr(tstr, name, tstr.make_str_wrapper(fun))
informationflow_init_1()
INITIALIZER_LIST = [informationflow_init_1]
def initialize():
for fn in INITIALIZER_LIST:
fn()
###Output
_____no_output_____
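###Markdown
As a quick check (a sketch), a method wrapped in this manner now propagates the taint to its result:
###Code
tstr('hello', taint='LOW').upper().taint
###Output
_____no_output_____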
###Markdown
The one missing operator is `+` with a regular string on the left side and a tainted string on the right side. Python supports a `__radd__()` method which is invoked if the associated object is used on the right side of an addition.
###Code
class tstr(tstr):
def __radd__(self, value):
"""Return value + self, as a `tstr` object"""
return self.create(value + str(self))
###Output
_____no_output_____
###Markdown
With this, we are already done. Let us create a string `thello` with a taint `LOW`.
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
Now, any substring will also be tainted:
###Code
thello[0].taint # type: ignore
thello[1:3].taint # type: ignore
###Output
_____no_output_____
###Markdown
String additions will return a `tstr` object with the taint:
###Code
(tstr('foo', taint='HIGH') + 'bar').taint # type: ignore
###Output
_____no_output_____
###Markdown
Our `__radd__()` method ensures this also works if the `tstr` occurs on the right side of a string addition:
###Code
('foo' + tstr('bar', taint='HIGH')).taint # type: ignore
thello += ', world' # type: ignore
thello.taint # type: ignore
###Output
_____no_output_____
###Markdown
Other operators such as multiplication also work:
###Code
(thello * 5).taint # type: ignore
('hw %s' % thello).taint # type: ignore
(tstr('hello %s', taint='HIGH') % 'world').taint # type: ignore
###Output
_____no_output_____
###Markdown
Tracking Untrusted InputSo, what can one do with tainted strings? We reconsider the `DB` example. We define a "better" `TrustedDB` which only accepts strings tainted as `"TRUSTED"`.
###Code
class TrustedDB(DB):
def sql(self, s):
assert isinstance(s, tstr), "Need a tainted string"
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
return super().sql(s)
###Output
_____no_output_____
###Markdown
Feeding a string with an "unknown" (i.e., non-existing) trust level will cause `TrustedDB` to fail:
###Code
bdb = TrustedDB(db.db)
from ExpectError import ExpectError
with ExpectError():
bdb.sql("select year from INVENTORY")
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_64351/3935989889.py", line 2, in <cell line: 1>
bdb.sql("select year from INVENTORY")
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_64351/995123203.py", line 3, in sql
assert isinstance(s, tstr), "Need a tainted string"
AssertionError: Need a tainted string (expected)
###Markdown
Additionally, any user input would originally be tagged with `"UNTRUSTED"` as its taint. If we place an untrusted string into our better database, it will also fail:
###Code
bad_user_input = tstr('__import__("os").popen("ls").read()', taint='UNTRUSTED')
with ExpectError():
bdb.sql(bad_user_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_64351/3307042773.py", line 3, in <cell line: 2>
bdb.sql(bad_user_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_64351/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
Hence, somewhere along the computation, we have to turn the "untrusted" inputs into "trusted" strings. This process is called *sanitization*. A simple sanitization function for our purposes could ensure that the input is a simple `select` statement consisting only of a small set of allowed characters (letters, digits, and a few punctuation characters, but no quotes); if this is the case, then the input gets a new `"TRUSTED"` taint. If not, we turn the string into an (untrusted) empty string; other alternatives would be to raise an error or to escape or delete "untrusted" characters.
###Code
import re
def sanitize(user_input):
assert isinstance(user_input, tstr)
if re.match(
r'^select +[-a-zA-Z0-9_, ()]+ from +[-a-zA-Z0-9_, ()]+$', user_input):
return tstr(user_input, taint='TRUSTED')
else:
return tstr('', taint='UNTRUSTED')
good_user_input = tstr("select year,model from inventory", taint='UNTRUSTED')
sanitized_input = sanitize(good_user_input)
sanitized_input
sanitized_input.taint
bdb.sql(sanitized_input)
###Output
_____no_output_____
###Markdown
Let us now try out our untrusted input:
###Code
sanitized_input = sanitize(bad_user_input)
sanitized_input
sanitized_input.taint
with ExpectError():
bdb.sql(sanitized_input)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_64351/249000876.py", line 2, in <cell line: 1>
bdb.sql(sanitized_input)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_64351/995123203.py", line 4, in sql
assert s.taint == 'TRUSTED', "Need a string with trusted taint"
AssertionError: Need a string with trusted taint (expected)
###Markdown
In a similar fashion, we can prevent SQL and code injections discussed in [the chapter on Web fuzzing](WebFuzzer.ipynb). Taint Aware FuzzingWe can also use tainting to _direct fuzzing to those grammar rules that are likely to generate dangerous inputs._ The idea here is to identify inputs generated by our fuzzer that lead to untrusted execution. First we define the exception to be thrown when a tainted value reaches a dangerous operation.
###Code
class Tainted(Exception):
def __init__(self, v):
self.v = v
def __str__(self):
return 'Tainted[%s]' % self.v
###Output
_____no_output_____
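###Markdown
As a quick check (a sketch), the exception simply carries and displays the offending value:
###Code
str(Tainted(tstr('some untrusted value', taint='UNTRUSTED')))
###Output
_____no_output_____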
###Markdown
TaintedDBNext, since `my_eval()` is the most dangerous operation in the `DB` class, we define a new class `TaintedDB` that overrides `my_eval()` to throw an exception whenever an untrusted string reaches this point.
###Code
class TaintedDB(DB):
def my_eval(self, statement, g, l):
if statement.taint != 'TRUSTED':
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
We initialize an instance of `TaintedDB`
###Code
tdb = TaintedDB()
tdb.db = db.db
###Output
_____no_output_____
###Markdown
Then we start fuzzing.
###Code
import traceback
for _ in range(10):
query = gf.fuzz()
print(repr(query))
try:
res = tdb.sql(tstr(query, taint='UNTRUSTED'))
print(repr(res))
except SQLException as e:
pass
except Tainted as e:
print("> ", e)
except:
traceback.print_exc()
break
print()
###Output
'delete from inventory where y/u-l+f/y<Y(c)/A-H*q'
> Tainted[y/u-l+f/y<Y(c)/A-H*q]
"insert into inventory (G,Wmp,sl3hku3) values ('<','?')"
"insert into inventory (d0) values (',_G')"
'select P*Q-w/x from inventory where X<j==:==j*r-f'
> Tainted[(X<j==:==j*r-f)]
'select a>F*i from inventory where Q/I-_+P*j>.'
> Tainted[(Q/I-_+P*j>.)]
'select (V-i<T/g) from inventory where T/r/G<FK(m)/(i)'
> Tainted[(T/r/G<FK(m)/(i))]
'select (((i))),_(S,_)/L-k<H(Sv,R,n,W,Y) from inventory'
> Tainted[((((i))),_(S,_)/L-k<H(Sv,R,n,W,Y))]
'select (N==c*U/P/y),i-e/n*y,T!=w,u from inventory'
> Tainted[((N==c*U/P/y),i-e/n*y,T!=w,u)]
'update inventory set _=B,n=v where o-p*k-J>T'
'select s from inventory where w4g4<.m(_)/_>t'
> Tainted[(w4g4<.m(_)/_>t)]
###Markdown
One can see that `insert`, `update`, `select` and `delete` statements on an existing table lead to taint exceptions. We can now focus on these specific kinds of inputs. However, this is not the only thing we can do. We will see how we can identify specific portions of input that reached tainted execution using character origins in the later sections. But before that, we explore other uses of taints. Preventing Privacy LeaksUsing taints, we can also ensure that secret information does not leak out. We can assign a special taint `"SECRET"` to strings whose information must not leak out:
###Code
secrets = tstr('<Plenty of secret keys>', taint='SECRET')
###Output
_____no_output_____
###Markdown
Accessing any substring of `secrets` will propagate the taint:
###Code
secrets[1:3].taint # type: ignore
###Output
_____no_output_____
###Markdown
Consider the _heartbeat_ security leak from [the chapter on Fuzzing](Fuzzer.ipynb), in which a server would accidentally send back not only the user input sent to it, but also secret memory. If the reply consists only of the user input, there is no taint associated with it:
###Code
user_input = "hello"
reply = user_input
isinstance(reply, tstr)
###Output
_____no_output_____
###Markdown
If, however, the reply contains _any_ part of the secret, the reply will be tainted:
###Code
reply = user_input + secrets[0:5]
reply
reply.taint # type: ignore
###Output
_____no_output_____
###Markdown
The output function of our server would now ensure that the data sent back does not contain any secret information:
###Code
def send_back(s):
assert not isinstance(s, tstr) and not s.taint == 'SECRET' # type: ignore
...
with ExpectError():
send_back(reply)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_64351/3747050841.py", line 2, in <cell line: 1>
send_back(reply)
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_64351/3158733057.py", line 2, in send_back
assert not isinstance(s, tstr) and not s.taint == 'SECRET' # type: ignore
AssertionError (expected)
###Markdown
Our `tstr` solution can help to identify information leaks – but it is by no means complete. If we actually take the `heartbeat()` implementation from [the chapter on Fuzzing](Fuzzer.ipynb), we will see that _any_ reply is marked as `SECRET` – even those not even accessing secret memory:
###Code
from Fuzzer import heartbeat
reply = heartbeat('hello', 5, memory=secrets)
reply.taint # type: ignore
###Output
_____no_output_____
###Markdown
Why is this? If we look into the implementation of `heartbeat()`, we will see that it first builds a long string `memory` from the (non-secret) reply and the (secret) memory, before returning the first characters from `memory`.
```python
# Store reply in memory
memory = reply + memory[len(reply):]
```
At this point, the whole memory still is tainted as `SECRET`, _including_ the non-secret part from `reply`. We may be able to circumvent the issue by tagging the `reply` as `PUBLIC` – but then, this taint would be in conflict with the `SECRET` tag of `memory`. What happens if we compose a string from two differently tainted strings?
###Code
thilo = tstr("High", taint='HIGH') + tstr("Low", taint='LOW')
###Output
_____no_output_____
###Markdown
It turns out that in this case, the `__add__()` method takes precedence over the `__radd__()` method, which means that the right-hand `"Low"` string is treated as a regular (non-tainted) string.
###Code
thilo
thilo.taint # type: ignore
###Output
_____no_output_____
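###Markdown
One remedy, as discussed below, would be to resolve such conflicts by giving one taint precedence over the other whenever strings are combined. Here is a hedged sketch (the class `prioritized_tstr` is introduced here for illustration only, assuming a privacy setting in which a `SECRET` taint dominates):
###Code
class prioritized_tstr(tstr):
    """A sketch: when combining strings, a 'SECRET' taint wins over any other."""
    def create(self, s):
        return prioritized_tstr(s, taint=self.taint)

    def __add__(self, other):
        # Assumption: 'SECRET' takes precedence; otherwise keep the left taint
        taints = {self.taint, getattr(other, 'taint', None)}
        taint = 'SECRET' if 'SECRET' in taints else self.taint
        return prioritized_tstr(str(self) + str(other), taint=taint)

(prioritized_tstr("High", taint='SECRET') + tstr("Low", taint='LOW')).taint
###Output
_____no_output_____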
###Markdown
We could set up the `__add__()` and other methods with special handling for conflicting taints. However, the way this conflict should be resolved would be highly _application-dependent_:* If we use taints to indicate _privacy levels_, `SECRET` privacy should take precedence over `PUBLIC` privacy. Any combination of a `SECRET`-tainted string and a `PUBLIC`-tainted string thus should have a `SECRET` taint.* If we use taints to indicate _origins_ of information, an `UNTRUSTED` origin should take precedence over a `TRUSTED` origin. Any combination of an `UNTRUSTED`-tainted string and a `TRUSTED`-tainted string thus should have an `UNTRUSTED` taint.Of course, such conflict resolutions can be implemented. But even so, they will not help us differentiate secret from non-secret output data in the `heartbeat()` example. Tracking Individual CharactersFortunately, there is a better, more generic way to solve the above problems. The key to composition of differently tainted strings is to assign taints not only to strings, but actually to every bit of information – in our case, characters. If every character has a taint on its own, a new composition of characters will simply inherit this very taint _per character_. To this end, we introduce a second bit of information named _origin_. Distinguishing various untrusted sources may be accomplished by giving each instance a separate origin (called *colors* in the dynamic taint analysis literature). You will see an instance of this technique in the chapter on [Grammar Mining](GrammarMiner.ipynb). In this section, we carry *character level* origins. That is, given a fragment that resulted from a portion of the original string, one will be able to tell which portion of the input string the fragment was taken from. In essence, each input character index gets its own color. More complex origin schemes such as *bitmap origins* are possible, where a single character may result from multiple character indexes of the origin (as in *checksum* operations on strings). We do not consider these in this chapter. A Class for Tracking Character OriginsLet us introduce a class `ostr` which, like `tstr`, carries a taint for each string, and additionally an _origin_ for each character that indicates its source. Each origin is a consecutive number in a particular range (by default, starting with zero) indicating the character's _position_ within a specific origin.
###Code
class ostr(str):
"""Wrapper for strings, saving taint and origin information"""
DEFAULT_ORIGIN = 0
def __new__(cls, value, *args, **kw):
"""Create an ostr() instance. Used internally."""
return str.__new__(cls, value)
def __init__(self, value: Any, taint: Any = None,
origin: Optional[Union[int, List[int]]] = None, **kwargs) -> None:
"""Constructor.
`value` is the string value the `ostr` object is to be constructed from.
`taint` is an (optional) taint to be propagated to derived strings.
`origin` (optional) is either
- an integer denoting the index of the first character in `value`, or
- a list of integers denoting the origins of the characters in `value`,
"""
self.taint = taint
if origin is None:
origin = ostr.DEFAULT_ORIGIN
if isinstance(origin, int):
self.origin = list(range(origin, origin + len(self)))
else:
self.origin = origin
assert len(self.origin) == len(self)
###Output
_____no_output_____
###Markdown
As with `tstr`, above, we implement methods for conversion into (regular) Python strings:
###Code
class ostr(ostr):
def create(self, s):
return ostr(s, taint=self.taint, origin=self.origin)
class ostr(ostr):
UNKNOWN_ORIGIN = -1
def __repr__(self):
# handle escaped chars
origin = [ostr.UNKNOWN_ORIGIN]
for s, o in zip(str(self), self.origin):
origin.extend([o] * (len(repr(s)) - 2))
origin.append(ostr.UNKNOWN_ORIGIN)
return ostr(str.__repr__(self), taint=self.taint, origin=origin)
class ostr(ostr):
def __str__(self):
return str.__str__(self)
###Output
_____no_output_____
###Markdown
By default, character origins start with `0`:
###Code
othello = ostr('hello')
assert othello.origin == [0, 1, 2, 3, 4]
###Output
_____no_output_____
###Markdown
We can also specify the starting origin as below -- `6..10`
###Code
tworld = ostr('world', origin=6)
assert tworld.origin == [6, 7, 8, 9, 10]
a = ostr("hello\tworld")
repr(a).origin # type: ignore
###Output
_____no_output_____
###Markdown
`str()` returns a `str` instance without origin or taint information:
###Code
assert type(str(othello)) == str
###Output
_____no_output_____
###Markdown
`repr()`, however, keeps the origin information for the original string:
###Code
repr(othello)
repr(othello).origin # type: ignore
###Output
_____no_output_____
###Markdown
Just as with taints, we can clear origins and check whether an origin is present:
###Code
class ostr(ostr):
def clear_taint(self):
self.taint = None
return self
def has_taint(self):
return self.taint is not None
class ostr(ostr):
def clear_origin(self):
self.origin = [self.UNKNOWN_ORIGIN] * len(self)
return self
def has_origin(self):
return any(origin != self.UNKNOWN_ORIGIN for origin in self.origin)
othello = ostr('Hello')
assert othello.has_origin()
othello.clear_origin()
assert not othello.has_origin()
###Output
_____no_output_____
###Markdown
In the remainder of this section, we re-implement various string methods such that they also keep track of origins. If this is too tedious for you, jump right [to the next section](Checking-Origins) which gives a number of usage examples. Excursion: Implementing String Methods CreateWe need to create new substrings that are wrapped in `ostr` objects. However, we also want to allow our subclasses to create their own instances. Hence we again provide a `create()` method that produces a new `ostr` instance.
###Code
class ostr(ostr):
def create(self, res, origin=None):
return ostr(res, taint=self.taint, origin=origin)
othello = ostr('hello', taint='HIGH')
otworld = othello.create('world', origin=6)
otworld.origin
otworld.taint
assert (othello.origin, otworld.origin) == (
[0, 1, 2, 3, 4], [6, 7, 8, 9, 10])
###Output
_____no_output_____
###Markdown
IndexIn Python, indexing is provided through `__getitem__()`. Indexing on positive integers is simple enough. However, it has two additional wrinkles. The first is that, if the index is negative, that many characters are counted from the end of the string which lies just after the last character. That is, the last character has a negative index `-1`
###Code
class ostr(ostr):
def __getitem__(self, key):
res = super().__getitem__(key)
if isinstance(key, int):
key = len(self) + key if key < 0 else key
return self.create(res, [self.origin[key]])
elif isinstance(key, slice):
return self.create(res, self.origin[key])
else:
assert False
ohello = ostr('hello', taint='HIGH')
assert (ohello[0], ohello[-1]) == ('h', 'o')
ohello[0].taint
###Output
_____no_output_____
###Markdown
The other wrinkle is that `__getitem__()` can accept a slice. We discuss this next. SlicesThe Python `slice` operator `[n:m]` relies on the object being an `iterator`. Hence, we define the `__iter__()` method, which returns a custom `iterator`.
###Code
class ostr(ostr):
def __iter__(self):
return ostr_iterator(self)
###Output
_____no_output_____
###Markdown
The `__iter__()` method requires a supporting `iterator` object. The `iterator` is used to save the state of the current iteration, which it does by keeping a reference to the original `ostr`, and the current index of iteration `_str_idx`.
###Code
class ostr_iterator():
def __init__(self, ostr):
self._ostr = ostr
self._str_idx = 0
def __next__(self):
if self._str_idx == len(self._ostr):
raise StopIteration
# calls ostr getitem should be ostr
c = self._ostr[self._str_idx]
assert isinstance(c, ostr)
self._str_idx += 1
return c
###Output
_____no_output_____
###Markdown
Bringing all these together:
###Code
thw = ostr('hello world', taint='HIGH')
thw[0:5]
assert thw[0:5].has_taint()
assert thw[0:5].has_origin()
thw[0:5].taint
thw[0:5].origin
###Output
_____no_output_____
###Markdown
Splits
###Code
def make_split_wrapper(fun):
def proxy(self, *args, **kwargs):
lst = fun(self, *args, **kwargs)
return [self.create(elem) for elem in lst]
return proxy
for name in ['split', 'rsplit', 'splitlines']:
fun = getattr(str, name)
setattr(ostr, name, make_split_wrapper(fun))
othello = ostr('hello world', taint='LOW')
othello == 'hello world'
othello.split()[0].taint # type: ignore
###Output
_____no_output_____
###Markdown
(Exercise for the reader: handle _partitions_, i.e., splitting a string by substrings) ConcatenationIf two origined strings are concatenated together, it may be desirable to transfer the origins from each to the corresponding portion of the resulting string. The concatenation of strings is accomplished by overriding `__add__()`.
###Code
class ostr(ostr):
def __add__(self, other):
if isinstance(other, ostr):
return self.create(str.__add__(self, other),
(self.origin + other.origin))
else:
return self.create(str.__add__(self, other),
(self.origin + [self.UNKNOWN_ORIGIN for i in other]))
###Output
_____no_output_____
###Markdown
Testing concatenations between two `ostr` instances:
###Code
othello = ostr("hello")
otworld = ostr("world", origin=6)
othw = othello + otworld
assert othw.origin == [0, 1, 2, 3, 4, 6, 7, 8, 9, 10] # type: ignore
###Output
_____no_output_____
###Markdown
What if a `ostr` is concatenated with a `str`?
###Code
space = " "
th_w = othello + space + otworld
assert th_w.origin == [
0,
1,
2,
3,
4,
ostr.UNKNOWN_ORIGIN,
ostr.UNKNOWN_ORIGIN,
6,
7,
8,
9,
10]
###Output
_____no_output_____
###Markdown
One wrinkle here is that when adding an `ostr` and a `str`, the user may place the `str` first, in which case the `__add__()` method will be called on the `str` instance, not on the `ostr` instance. However, Python provides a solution: if one defines `__radd__()` on the `ostr` instance, that method will be called rather than `str.__add__()`.
###Code
class ostr(ostr):
def __radd__(self, other):
origin = other.origin if isinstance(other, ostr) else [
self.UNKNOWN_ORIGIN for i in other]
return self.create(str.__add__(other, self), (origin + self.origin))
###Output
_____no_output_____
###Markdown
We test it out:
###Code
shello = "hello"
otworld = ostr("world")
thw = shello + otworld
assert thw.origin == [ostr.UNKNOWN_ORIGIN] * len(shello) + [0, 1, 2, 3, 4] # type: ignore
###Output
_____no_output_____
###Markdown
These two methods, `slicing` and `concatenation`, are sufficient to implement other string methods that result in a string and do not change the characters underneath (i.e., no case change). Hence, we look at a helper method next. Extract Origin StringGiven a specific input index, the method `x()` extracts the corresponding origined portion from an `ostr`. As a convenience it supports `slices` along with `ints`.
###Code
class ostr(ostr):
class TaintException(Exception):
pass
def x(self, i=0):
"""Extract substring at index/slice `i`"""
if not self.origin:
            raise self.TaintException('Invalid request idx')
if isinstance(i, int):
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j == i]]
elif isinstance(i, slice):
r = range(i.start or 0, i.stop or len(self), i.step or 1)
return [self[p]
for p in [k for k, j in enumerate(self.origin) if j in r]]
thw = ostr('hello world', origin=100)
assert thw.x(101) == ['e']
assert thw.x(slice(101, 105)) == ['e', 'l', 'l', 'o']
###Output
_____no_output_____
###Markdown
Replace The `replace()` method replaces a portion of the string with another.
###Code
class ostr(ostr):
def replace(self, a, b, n=None):
old_origin = self.origin
b_origin = b.origin if isinstance(
b, ostr) else [self.UNKNOWN_ORIGIN] * len(b)
mystr = str(self)
i = 0
while True:
if n and i >= n:
break
idx = mystr.find(a)
if idx == -1:
break
last = idx + len(a)
mystr = mystr.replace(a, b, 1)
partA, partB = old_origin[0:idx], old_origin[last:]
old_origin = partA + b_origin + partB
i += 1
return self.create(mystr, old_origin)
my_str = ostr("aa cde aa")
res = my_str.replace('aa', 'bb')
assert (res, res.origin) == ('bb cde bb',
                             [ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN,
                              2, 3, 4, 5, 6,
                              ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN])
my_str = ostr("aa cde aa")
res = my_str.replace('aa', ostr('bb', origin=100))
assert (
res, res.origin) == (
('bb cde bb'), [
100, 101, 2, 3, 4, 5, 6, 100, 101])
###Output
_____no_output_____
###Markdown
Split We essentially have to re-implement split operations, and split by space is slightly different from other splits.
###Code
class ostr(ostr):
def _split_helper(self, sep, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = len(sep)
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
first_idx = last_idx + sep_len
return result_list
def _split_space(self, splitted):
result_list = []
last_idx = 0
first_idx = 0
sep_len = 0
for s in splitted:
last_idx = first_idx + len(s)
item = self[first_idx:last_idx]
result_list.append(item)
v = str(self[last_idx:])
sep_len = len(v) - len(v.lstrip(' '))
first_idx = last_idx + sep_len
return result_list
def rsplit(self, sep=None, maxsplit=-1):
splitted = super().rsplit(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
def split(self, sep=None, maxsplit=-1):
splitted = super().split(sep, maxsplit)
if not sep:
return self._split_space(splitted)
return self._split_helper(sep, splitted)
my_str = ostr('ab cdef ghij kl')
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([0, 1], [3, 4, 5, 6], [8, 9, 10, 11], [13, 14])
my_str = ostr('ab cdef ghij kl', origin=list(range(0, 15)))
ab, cdef, ghij, kl = my_str.rsplit(sep=' ')
assert(ab.origin, cdef.origin, kl.origin) == ([0, 1], [3, 4, 5, 6], [13, 14])
my_str = ostr('ab   cdef ghij    kl', origin=100, taint='HIGH')
ab, cdef, ghij, kl = my_str.rsplit()
assert (ab.origin, cdef.origin, ghij.origin,
kl.origin) == ([100, 101], [105, 106, 107, 108], [110, 111, 112, 113],
[118, 119])
my_str = ostr('ab   cdef ghij    kl', origin=list(range(0, 20)), taint='HIGH')
ab, cdef, ghij, kl = my_str.split()
assert (ab.origin, cdef.origin, kl.origin) == ([0, 1], [5, 6, 7, 8], [18, 19])
assert ab.taint == 'HIGH'
###Output
_____no_output_____
###Markdown
Strip
###Code
class ostr(ostr):
def strip(self, cl=None):
return self.lstrip(cl).rstrip(cl)
def lstrip(self, cl=None):
res = super().lstrip(cl)
i = self.find(res)
return self[i:]
def rstrip(self, cl=None):
res = super().rstrip(cl)
return self[0:len(res)]
my_str1 = ostr("  abc  ")
v = my_str1.strip()
assert (v, v.origin) == ('abc', [2, 3, 4])
my_str1 = ostr("  abc  ")
v = my_str1.lstrip()
assert (v, v.origin) == ('abc  ', [2, 3, 4, 5, 6])
my_str1 = ostr("  abc  ")
v = my_str1.rstrip()
assert (v, v.origin) == ('  abc', [0, 1, 2, 3, 4])
###Output
_____no_output_____
###Markdown
Expand Tabs
###Code
class ostr(ostr):
def expandtabs(self, n=8):
parts = self.split('\t')
res = super().expandtabs(n)
all_parts = []
for i, p in enumerate(parts):
all_parts.extend(p.origin)
if i < len(parts) - 1:
                l = n - len(all_parts) % n  # the tab advances to the next multiple of n
all_parts.extend([p.origin[-1]] * l)
return self.create(res, all_parts)
my_s = str("ab\tcd")
my_ostr = ostr("ab\tcd")
v1 = my_s.expandtabs(4)
v2 = my_ostr.expandtabs(4)
assert str(v1) == str(v2)
assert (len(v1), repr(v2), v2.origin) == (6, "'ab cd'", [0, 1, 1, 1, 3, 4])
class ostr(ostr):
def join(self, iterable):
mystr = ''
myorigin = []
sep_origin = self.origin
lst = list(iterable)
for i, s in enumerate(lst):
sorigin = s.origin if isinstance(s, ostr) else [
self.UNKNOWN_ORIGIN] * len(s)
myorigin.extend(sorigin)
mystr += str(s)
if i < len(lst) - 1:
myorigin.extend(sep_origin)
mystr += str(self)
res = super().join(iterable)
assert len(res) == len(mystr)
return self.create(res, myorigin)
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104]) # type: ignore
v4 = ostr('').join([v2, v3, v1])
assert (
v4, v4.origin) == (
'cdefab', [
103, 104, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 100, 101])
my_str = ostr("ab cd", origin=100)
(v1, v2), v3 = my_str.split(), 'ef'
assert (v1.origin, v2.origin) == ([100, 101], [103, 104]) # type: ignore
v4 = ostr(',').join([v2, v3, v1])
assert (v4, v4.origin) == ('cd,ef,ab',
[103, 104, 0, ostr.UNKNOWN_ORIGIN, ostr.UNKNOWN_ORIGIN, 0, 100, 101]) # type: ignore
###Output
_____no_output_____
###Markdown
Partitions
###Code
class ostr(ostr):
def partition(self, sep):
partA, sep, partB = super().partition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
def rpartition(self, sep):
partA, sep, partB = super().rpartition(sep)
return (self.create(partA, self.origin[0:len(partA)]),
self.create(sep,
self.origin[len(partA):len(partA) + len(sep)]),
self.create(partB, self.origin[len(partA) + len(sep):]))
###Output
_____no_output_____
###Markdown
Justify
###Code
class ostr(ostr):
    def ljust(self, width, fillchar=' '):
        res = super().ljust(width, fillchar)
        fill = len(res) - len(self)
        if isinstance(fillchar, ostr):
            t = fillchar.origin[0]
        else:
            t = self.UNKNOWN_ORIGIN
        # str.ljust() pads on the right, so the fill origins come last
        return self.create(res, self.origin + [t] * fill)
class ostr(ostr):
    def rjust(self, width, fillchar=' '):
        res = super().rjust(width, fillchar)
        fill = len(res) - len(self)
        if isinstance(fillchar, ostr):
            t = fillchar.origin[0]
        else:
            t = self.UNKNOWN_ORIGIN
        # str.rjust() pads on the left, so the fill origins come first
        return self.create(res, [t] * fill + self.origin)
###Output
_____no_output_____
###Markdown
mod
###Code
class ostr(ostr):
def __mod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
s_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = self.find('%s')
assert i >= 0
res = super().__mod__(s)
r_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
class ostr(ostr):
def __rmod__(self, s):
# nothing else implemented for the time being
assert isinstance(s, str)
r_origin = s.origin if isinstance(
s, ostr) else [self.UNKNOWN_ORIGIN] * len(s)
i = s.find('%s')
assert i >= 0
res = super().__rmod__(s)
s_origin = self.origin[:]
r_origin[i:i + 2] = s_origin
return self.create(res, origin=r_origin)
a = ostr('hello %s world', origin=100)
a
(a % 'good').origin
b = 'hello %s world'
c = ostr('bad', origin=10)
(b % c).origin
###Output
_____no_output_____
###Markdown
String methods that do not change origin
###Code
class ostr(ostr):
def swapcase(self):
return self.create(str(self).swapcase(), self.origin)
def upper(self):
return self.create(str(self).upper(), self.origin)
def lower(self):
return self.create(str(self).lower(), self.origin)
def capitalize(self):
return self.create(str(self).capitalize(), self.origin)
def title(self):
return self.create(str(self).title(), self.origin)
a = ostr('aa', origin=100).upper()
a, a.origin
###Output
_____no_output_____
###Markdown
General wrappers These are not strictly needed for operation, but can be useful for tracing.
###Code
def make_basic_str_wrapper(fun): # type: ignore
def proxy(*args, **kwargs):
res = fun(*args, **kwargs)
return res
return proxy
import inspect
import types
def informationflow_init_2():
ostr_members = [name for name, fn in inspect.getmembers(ostr, callable)
if isinstance(fn, types.FunctionType) and fn.__qualname__.startswith('ostr')]
for name, fn in inspect.getmembers(str, callable):
if name not in set(['__class__', '__new__', '__str__', '__init__',
'__repr__', '__getattribute__']) | set(ostr_members):
setattr(ostr, name, make_basic_str_wrapper(fn))
informationflow_init_2()
INITIALIZER_LIST.append(informationflow_init_2)
###Output
_____no_output_____
###Markdown
Methods yet to be translated These methods generate strings from other strings. However, we do not have the right implementations for any of these. Hence these are marked as dangerous until we can generate the right translations.
###Code
def make_str_abort_wrapper(fun):
def proxy(*args, **kwargs):
raise ostr.TaintException(
'%s Not implemented in `ostr`' %
fun.__name__)
return proxy
def informationflow_init_3():
for name, fn in inspect.getmembers(str, callable):
# Omitted 'splitlines' as this is needed for formatting output in
# IPython/Jupyter
if name in ['__format__', 'format_map', 'format',
'__mul__', '__rmul__', 'center', 'zfill', 'decode', 'encode']:
setattr(ostr, name, make_str_abort_wrapper(fn))
informationflow_init_3()
INITIALIZER_LIST.append(informationflow_init_3)
###Output
_____no_output_____
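###Markdown
For instance, calling one of these blocked methods now aborts with a `TaintException` right away; this quick check is not part of the original chapter:
```python
with ExpectError():
    ostr('hello').center(10)  # center() is on the block list above
```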
###Markdown
While generating proxy wrappers for string operations can handle most common cases of transmission of information flow, some of the operations involving strings can not be overridden. For example, consider the following. End of Excursion Checking OriginsWith all this implemented, we now have full-fledged `ostr` strings where we can easily check the origin of each and every character. To check whether a string originates from another string, we can convert the origin to a set and resort to standard set operations:
###Code
s = ostr("hello", origin=100)
s[1]
s[1].origin
set(s[1].origin) <= set(s.origin)
t = ostr("world", origin=200)
set(s.origin) <= set(t.origin)
u = s + t + "!"
u.origin
ostr.UNKNOWN_ORIGIN in u.origin
###Output
_____no_output_____
###Markdown
Privacy Leaks RevisitedLet us apply it to see whether we can come up with a satisfactory solution for checking the `heartbeat()` function against information leakage.
###Code
SECRET_ORIGIN = 1000
###Output
_____no_output_____
###Markdown
We define a "secret" that must not leak out:
###Code
secret = ostr('<again, some super-secret input>', origin=SECRET_ORIGIN)
###Output
_____no_output_____
###Markdown
Each and every character in `secret` has an origin starting with `SECRET_ORIGIN`:
###Code
print(secret.origin)
###Output
[1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023, 1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031]
###Markdown
If we now invoke `heartbeat()` with a given string, the origin of the reply should all be `UNKNOWN_ORIGIN` (from the input), and none of the characters should have a `SECRET_ORIGIN`.
###Code
hello_s = heartbeat('hello', 5, memory=secret)
hello_s
assert isinstance(hello_s, ostr)
print(hello_s.origin)
###Output
[-1, -1, -1, -1, -1]
###Markdown
We can verify that the secret did not leak out by formulating appropriate assertions:
###Code
assert hello_s.origin == [ostr.UNKNOWN_ORIGIN] * len(hello_s)
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in hello_s.origin)
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
###Output
_____no_output_____
###Markdown
All assertions pass, again confirming that no secret leaked out. Let us now go and exploit `heartbeat()` to reveal its secrets. As `heartbeat()` is unchanged, it is as vulnerable as it was:
###Code
hello_s = heartbeat('hello', 32, memory=secret)
hello_s
###Output
_____no_output_____
###Markdown
Now, however, the reply _does_ contain secret information:
###Code
assert isinstance(hello_s, ostr)
print(hello_s.origin)
with ExpectError():
assert hello_s.origin == [ostr.UNKNOWN_ORIGIN] * len(hello_s)
with ExpectError():
assert all(origin == ostr.UNKNOWN_ORIGIN for origin in hello_s.origin)
with ExpectError():
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_64351/1577803914.py", line 2, in <cell line: 1>
assert not any(origin >= SECRET_ORIGIN for origin in hello_s.origin)
AssertionError (expected)
###Markdown
We can now integrate these assertions into the `heartbeat()` function, causing it to fail before leaking information. Additionally (or alternatively), we can also rewrite our output functions not to give out any secret information. We will leave these two exercises for the reader; a minimal sketch of the first one follows below. Taint-Directed FuzzingThe previous _Taint Aware Fuzzing_ was a bit unsatisfactory in that we could not focus on the specific parts of the grammar that led to dangerous operations. We fix that with _taint-directed fuzzing_ using `TrackingDB`. The idea here is to track the origins of each character that reaches `eval`, trace them back to the grammar nodes that generated them, and increase the probability of using those nodes again. TrackingDBThe `TrackingDB` is similar to `TaintedDB`. The difference is that, if we find that the execution has reached `my_eval`, we simply raise `Tainted`.
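###Markdown
As a minimal sketch of the first exercise above (the wrapper name `checked_heartbeat` and its assertion are illustrative, not part of the original code), one could refuse to return a reply as soon as any of its characters stems from the secret:
```python
def checked_heartbeat(reply, length, memory):
    """Illustrative wrapper: fail before any secret character leaves the function."""
    s = heartbeat(reply, length, memory=memory)
    assert isinstance(s, ostr)
    assert not any(origin >= SECRET_ORIGIN for origin in s.origin), \
        "heartbeat() would leak secret data"
    return s

checked_heartbeat('hello', 5, memory=secret)       # benign request passes

with ExpectError():
    checked_heartbeat('hello', 32, memory=secret)  # over-long request is caught in time
```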
###Code
class TrackingDB(TaintedDB):
def my_eval(self, statement, g, l):
if statement.origin:
raise Tainted(statement)
try:
return eval(statement, g, l)
except:
raise SQLException('Invalid SQL (%s)' % repr(statement))
###Output
_____no_output_____
###Markdown
Next, we need a specially crafted fuzzer that preserves the taints. TaintedGrammarFuzzerWe define a `TaintedGrammarFuzzer` class that ensures that the taints propagate to the derivation tree. This is similar to the `GrammarFuzzer` from the [chapter on grammar fuzzers](GrammarFuzzer.ipynb) except that the origins and taints are preserved.
###Code
import random
from GrammarFuzzer import GrammarFuzzer
from Parser import canonical
class TaintedGrammarFuzzer(GrammarFuzzer):
def __init__(self,
grammar,
start_symbol=START_SYMBOL,
expansion_switch=1,
log=False):
self.tainted_start_symbol = ostr(
start_symbol, origin=[1] * len(start_symbol))
self.expansion_switch = expansion_switch
self.log = log
self.grammar = grammar
self.c_grammar = canonical(grammar)
self.init_tainted_grammar()
def expansion_cost(self, expansion, seen=set()):
symbols = [e for e in expansion if e in self.c_grammar]
if len(symbols) == 0:
return 1
if any(s in seen for s in symbols):
return float('inf')
return sum(self.symbol_cost(s, seen) for s in symbols) + 1
def fuzz_tree(self):
tree = (self.tainted_start_symbol, [])
nt_leaves = [tree]
expansion_trials = 0
while nt_leaves:
idx = random.randint(0, len(nt_leaves) - 1)
key, children = nt_leaves[idx]
expansions = self.ct_grammar[key]
if expansion_trials < self.expansion_switch:
expansion = random.choice(expansions)
else:
costs = [self.expansion_cost(e) for e in expansions]
m = min(costs)
all_min = [i for i, c in enumerate(costs) if c == m]
expansion = expansions[random.choice(all_min)]
new_leaves = [(token, []) for token in expansion]
new_nt_leaves = [e for e in new_leaves if e[0] in self.ct_grammar]
children[:] = new_leaves
nt_leaves[idx:idx + 1] = new_nt_leaves
if self.log:
print("%-40s" % (key + " -> " + str(expansion)))
expansion_trials += 1
return tree
def fuzz(self):
self.derivation_tree = self.fuzz_tree()
return self.tree_to_string(self.derivation_tree)
###Output
_____no_output_____
###Markdown
We use a specially prepared tainted grammar for fuzzing. We mark each individual definition, each individual rule, and each individual token with a separate origin (we chose a token boundary of 10 here, after inspecting the grammar). This allows us to track exactly which parts of the grammar were involved in the operations we are interested in.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def init_tainted_grammar(self):
key_increment, alt_increment, token_increment = 1000, 100, 10
key_origin = key_increment
self.ct_grammar = {}
for key, val in self.c_grammar.items():
key_origin += key_increment
os = []
for v in val:
ts = []
key_origin += alt_increment
for t in v:
nt = ostr(t, origin=key_origin)
key_origin += token_increment
ts.append(nt)
os.append(ts)
self.ct_grammar[key] = os
# a use tracking grammar
self.ctp_grammar = {}
for key, val in self.ct_grammar.items():
self.ctp_grammar[key] = [(v, dict(use=0)) for v in val]
###Output
_____no_output_____
###Markdown
As before, we initialize the `TrackingDB`
###Code
trdb = TrackingDB(db.db)
###Output
_____no_output_____
###Markdown
Finally, we need to ensure that the taints are preserved, when the tree is converted back to a string. For this, we define the `tainted_tree_to_string()`
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def tree_to_string(self, tree):
symbol, children, *_ = tree
e = ostr('')
if children:
return e.join([self.tree_to_string(c) for c in children])
else:
return e if symbol in self.c_grammar else symbol
###Output
_____no_output_____
###Markdown
We define `update_grammar()` that accepts a set of origins that reached the dangerous operations and the derivation tree of the original string used for fuzzing to update the enhanced grammar.
###Code
class TaintedGrammarFuzzer(TaintedGrammarFuzzer):
def update_grammar(self, origin, dtree):
def update_tree(dtree, origin):
key, children = dtree
if children:
updated_children = [update_tree(c, origin) for c in children]
corigin = set.union(
*[o for (key, children, o) in updated_children])
corigin = corigin.union(set(key.origin))
return (key, children, corigin)
else:
my_origin = set(key.origin).intersection(origin)
return (key, [], my_origin)
key, children, oset = update_tree(dtree, set(origin))
for key, alts in self.ctp_grammar.items():
for alt, o in alts:
alt_origins = set([i for token in alt for i in token.origin])
if alt_origins.intersection(oset):
o['use'] += 1
###Output
_____no_output_____
###Markdown
With these, we are now ready to fuzz.
###Code
def tree_type(tree):
key, children = tree
return (type(key), key, [tree_type(c) for c in children])
tgf = TaintedGrammarFuzzer(INVENTORY_GRAMMAR_F)
x = None
for _ in range(10):
qtree = tgf.fuzz_tree()
query = tgf.tree_to_string(qtree)
assert isinstance(query, ostr)
try:
print(repr(query))
res = trdb.sql(query)
print(repr(res))
except SQLException as e:
print(e)
except Tainted as e:
print(e)
origin = e.args[0].origin
tgf.update_grammar(origin, qtree)
except:
traceback.print_exc()
break
print()
###Output
'select (g!=(9)!=((:)==2==9)!=J)==-7 from inventory'
Tainted[((g!=(9)!=((:)==2==9)!=J)==-7)]
'delete from inventory where ((c)==T)!=5==(8!=Y)!=-5'
Tainted[((c)==T)!=5==(8!=Y)!=-5]
'select (((w==(((X!=------8)))))) from inventory'
Tainted[((((w==(((X!=------8)))))))]
'delete from inventory where ((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))'
Tainted[((.==(-3)!=(((-3))))!=(S==(((n))==Y))!=--2!=N==-----0==--0)!=(((((R))))==((v)))!=((((((------2==Q==-8!=(q)!=(((.!=2))==J)!=(1)!=(((-4!=--5==J!=(((A==.)))))!=(((((0==(P!=((R))!=(((j)))!=7))))==O==K))==(q))==--1==((H)==(t)==s!=-6==((y))==R)!=((H))!=W==--4==(P==(u)==-0)!=O==((-5==-------2!=4!=U))!=-1==((((((R!=-6))))))!=1!=Z)))==(((I)!=((S))!=(-4==s)==(7!=(A))==(s)==p==((_)!=(C))==((w)))))))]
'delete from inventory where ((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))'
Tainted[((2)==T!=-1)==N==(P)==((((((6==a)))))!=8)==(3)!=((---7))]
'delete from inventory where o!=2==---5==3!=t'
Tainted[o!=2==---5==3!=t]
'select (2) from inventory'
Tainted[((2))]
'select _ from inventory'
Tainted[(_)]
'select L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)) from inventory'
Tainted[(L!=(((1!=(Z)==C)!=C))==(((-0==-5==Q!=((--2!=(-0)==((0))==M)==(A))!=(X)!=e==(K==((b)))!=b==9==((((l)!=-7!=4)!=s==G))!=6==((((5==(((v==(((((((a!=d))==0!=4!=(4)==--1==(h)==-8!=(9)==-4)))))!=I!=-4))==v!=(Y==b)))==(a))!=((7)))))))==((4)))]
'delete from inventory where _==(7==(9)!=(---5)==1)==-8'
Tainted[_==(7==(9)!=(---5)==1)==-8]
###Markdown
We can now inspect our enhanced grammar to see how many times each rule was used.
###Code
tgf.ctp_grammar
###Output
_____no_output_____
###Markdown
From here, the idea is to focus on the rules that reached dangerous operations more often, and increase the probability of the values of that kind. The Limits of Taint TrackingWhile our framework can detect information leakage, it is by no means perfect. There are several ways in which taints can get lost and information thus may still leak out. ConversionsWe only track taints and origins through _strings_ and _characters_. If we convert these to numbers (or other data), the information is lost. As an example, consider this function, converting individual characters to numbers and back:
###Code
def strip_all_info(s):
t = ""
for c in s:
t += chr(ord(c))
return t
othello = ostr("Secret")
othello
othello.origin # type: ignore
###Output
_____no_output_____
###Markdown
The taints and origins will not propagate through the number conversion:
###Code
thello_stripped = strip_all_info(othello)
thello_stripped
with ExpectError():
thello_stripped.origin
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_64351/588526133.py", line 2, in <cell line: 1>
thello_stripped.origin
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
This issue could be addressed by extending numbers with taints and origins, just as we did for strings. At some point, however, this will still break down, because as soon as an internal C function in the Python library is reached, the taint will not propagate into and across the C function. (Unless one starts implementing dynamic taints for these, that is.) Internal C libraries As we mentioned before, calls to _internal_ C libraries do not propagate taints. For example, while the following preserves the taints,
###Code
hello = ostr('hello', origin=100)
world = ostr('world', origin=200)
(hello + ' ' + world).origin
###Output
_____no_output_____
###Markdown
a call to a `join` that should be equivalent will fail.
###Code
with ExpectError():
''.join([hello, ' ', world]).origin # type: ignore
###Output
Traceback (most recent call last):
File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_64351/2341342688.py", line 2, in <cell line: 1>
''.join([hello, ' ', world]).origin # type: ignore
AttributeError: 'str' object has no attribute 'origin' (expected)
###Markdown
Implicit Information FlowEven if one could taint all data in a program, there still would be means to break information flow – notably by turning explicit flow into _implicit_ flow, or data flow into _control flow_. Here is an example:
###Code
def strip_all_info_again(s):
t = ""
for c in s:
if c == 'a':
t += 'a'
elif c == 'b':
t += 'b'
elif c == 'c':
t += 'c'
...
###Output
_____no_output_____
###Markdown
With such a function, there is no explicit data flow between the characters in `s` and the characters in `t`; yet, the strings would be identical. This problem frequently occurs in programs that process and manipulate external input. Enforcing TaintingBoth conversions and implicit information flow are among several ways in which taint and origin information can get lost. To address the problem, the best solution is to _always assume the worst from untainted strings_:* When it comes to trust, an untainted string should be treated as _possibly untrusted_, and hence not relied upon unless sanitized.* When it comes to privacy, an untainted string should be treated as _possibly secret_, and hence not leaked out.As a consequence, your program should always have two kinds of taints: one for explicitly trusted (or secret) and one for explicitly untrusted (or non-secret). If a taint gets lost along the way, you may have to restore it from its sources – not unlike the string methods discussed above. The benefit is a trusted application, in which each and every information flow can be checked at runtime, with violations quickly discovered through automated tests. Synopsis This chapter provides two wrappers to Python _strings_ that allow one to track various properties. These include information on the security properties of the input, and information on originating indexes of the input string. Tracking String Taints`tstr` objects are replacements for Python strings that allow one to track and check _taints_ – that is, information about where a string originated. For instance, one can mark strings that originate from third party input with a taint of "LOW", meaning that they have a low security level. The taint is passed in the constructor of a `tstr` object:
###Code
thello = tstr('hello', taint='LOW')
###Output
_____no_output_____
###Markdown
A `tstr` object is fully compatible with original Python strings. For instance, we can index it and access substrings:
###Code
thello[:4]
###Output
_____no_output_____
###Markdown
However, the `tstr` object also stores the taint, which can be accessed using the `taint` attribute:
###Code
thello.taint
###Output
_____no_output_____
###Markdown
The neat thing about taints is that they propagate to all strings derived from the original tainted string.Indeed, any operation from a `tstr` string that results in a string fragment produces another `tstr` object that includes the original taint. For example:
###Code
thello[1:2].taint # type: ignore
###Output
_____no_output_____
###Markdown
`tstr` objects duplicate most `str` methods, as indicated in the class diagram:
###Code
# ignore
from ClassDiagram import display_class_hierarchy
display_class_hierarchy(tstr)
###Output
_____no_output_____
###Markdown
Tracking Character Origins`ostr` objects extend `tstr` objects by tracking not only a taint, but also the originating _indexes_ from the input string. This allows you to track exactly where individual characters came from. Assume you have a long string, which at index 100 contains the password `"joshua1234"`. Then you can save this origin information using an `ostr` as follows:
###Code
secret = ostr("joshua1234", origin=100, taint='SECRET')
###Output
_____no_output_____
###Markdown
The `origin` attribute of an `ostr` provides access to a list of indexes:
###Code
secret.origin
secret.taint
###Output
_____no_output_____
###Markdown
`ostr` objects are compatible with Python strings, except that string operations return `ostr` objects (together with the saved origin and taint information). An index of `-1` indicates that the corresponding character has no origin as supplied to the `ostr()` constructor:
###Code
secret_substr = (secret[0:4] + "-" + secret[6:])
secret_substr.taint
secret_substr.origin
###Output
_____no_output_____
###Markdown
`ostr` objects duplicate most `str` methods, as indicated in the class diagram:
###Code
# ignore
display_class_hierarchy(ostr)
###Output
_____no_output_____
###Markdown
Lessons Learned* String-based and character-based taints allow one to dynamically track the information flow from input to the internals of a system and back to the output.* Checking taints allows one to discover untrusted inputs and information leakage at runtime.* Data conversions and implicit data flow may strip taint information; the resulting untainted strings should be treated as having the worst possible taint.* Taints can be used in conjunction with fuzzing to provide a more robust indication of incorrect behavior than simply relying on program crashes. Next StepsAn even better alternative to our taint-directed fuzzing is to make use of _symbolic_ techniques that take the semantics of the program under test into account. The chapter on [flow fuzzing](FlowFuzzer.ipynb) introduces these symbolic techniques for the purpose of exploring information flows; the subsequent chapter on [symbolic fuzzing](SymbolicFuzzer.ipynb) then shows how to make full-fledged use of symbolic execution for covering code. Similarly, [search based fuzzing](SearchBasedFuzzer.ipynb) can often provide a cheaper exploration strategy. BackgroundTaint analysis on Python using a library approach, as we implemented it in this chapter, was discussed by Conti et al. \cite{Conti2010}. Exercises Exercise 1: Tainted NumbersIntroduce a class `tint` (for tainted integer) that, like `tstr`, has a taint attribute that gets passed on from `tint` to `tint`. Part 1: CreationImplement the `tint` class such that taints are set:
```python
x = tint(42, taint='SECRET')
assert x.taint == 'SECRET'
```
**Solution.** This is pretty straightforward, as we can apply the same scheme as for `tstr`:
###Code
class tint(int):
def __new__(cls, value, *args, **kw):
return int.__new__(cls, value)
def __init__(self, value, taint=None, **kwargs):
self.taint = taint
x = tint(42, taint='SECRET')
assert x.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 2: Arithmetic expressionsEnsure that taints get passed along arithmetic expressions; support addition, subtraction, multiplication, and division operators.
```python
y = x + 1
assert y.taint == 'SECRET'
```
**Solution.** As with `tstr`, we implement a `create()` method and a convenience function to quickly define all arithmetic operations:
###Code
class tint(tint):
def create(self, n):
# print("New tint from", n)
return tint(n, taint=self.taint)
###Output
_____no_output_____
###Markdown
The `make_int_wrapper()` function creates a wrapper around an existing `int` method which attaches the taint to the result of the method:
###Code
def make_int_wrapper(fun):
def proxy(self, *args, **kwargs):
res = fun(self, *args, **kwargs)
# print(fun, args, kwargs, "=", repr(res))
return self.create(res)
return proxy
###Output
_____no_output_____
###Markdown
We do this for all arithmetic operators:
###Code
for name in ['__add__', '__radd__', '__mul__', '__rmul__', '__sub__',
'__floordiv__', '__truediv__']:
fun = getattr(int, name)
setattr(tint, name, make_int_wrapper(fun))
x = tint(42, taint='SECRET')
y = x + 1
y.taint # type: ignore
###Output
_____no_output_____
###Markdown
Part 3: Passing taints from integers to stringsConverting a tainted integer into a string (using `repr()`) should yield a tainted string:
```python
x_s = repr(x)
assert x_s.taint == 'SECRET'
```
**Solution.** We define the string conversion functions such that they return a tainted string (`tstr`):
###Code
class tint(tint):
def __repr__(self) -> tstr:
s = int.__repr__(self)
return tstr(s, taint=self.taint)
class tint(tint):
def __str__(self) -> tstr:
return tstr(int.__str__(self), taint=self.taint)
x = tint(42, taint='SECRET')
x_s = repr(x)
assert isinstance(x_s, tstr)
assert x_s.taint == 'SECRET'
###Output
_____no_output_____
###Markdown
Part 4: Passing taints from strings to integersConverting a tainted object (with a `taint` attribute) to an integer should pass that taint:
```python
password = tstr('1234', taint='NOT_EXACTLY_SECRET')
x = tint(password)
assert x == 1234
assert x.taint == 'NOT_EXACTLY_SECRET'
```
**Solution.** This can be done by having the `__init__()` constructor check for a `taint` attribute:
###Code
class tint(tint):
def __init__(self, value, taint=None, **kwargs):
if taint is not None:
self.taint = taint
else:
self.taint = getattr(value, 'taint', None)
password = tstr('1234', taint='NOT_EXACTLY_SECRET')
x = tint(password)
assert x == 1234
assert x.taint == 'NOT_EXACTLY_SECRET'
###Output
_____no_output_____ |
tensorflow/linear_functions_tensorflow.ipynb | ###Markdown
Linear functions in TensorFlowThe most common operation in neural networks is calculating the linear combination of inputs, weights, and biases. As a reminder, we can write the output of the linear operation as$$y = xW + b$$Here, $W$ is a matrix of the weights connecting two layers. The output $y$, the input $x$, and the biases $b$ are all vectors. Weights and Bias in TensorFlowThe goal of training a neural network is to modify weights and biases to best predict the labels. In order to use weights and bias, you'll need a Tensor that can be modified. This leaves out `tf.placeholder()` and `tf.constant()`, since those Tensors can't be modified. This is where `tf.Variable` class comes in. `tf.Variable()`
###Code
import tensorflow as tf
x = tf.Variable(5)
###Output
_____no_output_____
###Markdown
The `tf.Variable` class creates a tensor with an initial value that can be modified, much like a normal Python variable. This tensor stores its state in the session, so you must initialize the state of the tensor manually. You'll use the `tf.global_variables_initializer()` function to initialize the state of all the Variable tensors. Initialization
###Code
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
###Output
_____no_output_____
###Markdown
The `tf.global_variables_initializer()` call returns an operation that will initialize all TensorFlow variables from the graph. You call the operation using a session to initialize all the variables as shown above. Using the `tf.Variable` class allows us to change the weights and bias, but an initial value needs to be chosen.Initializing the weights with random numbers from a normal distribution is good practice. Randomizing the weights helps the model from becoming stuck in the same place every time you train it. You'll learn more about this in the next lesson, when you study gradient descent.Similarly, choosing weights from a normal distribution prevents any one weight from overwhelming other weights. You'll use the `tf.truncated_normal()` function to generate random numbers from a normal distribution. `tf.truncated_normal()`
###Code
n_features = 120
n_labels = 5
weights = tf.Variable(tf.truncated_normal((n_features, n_labels)))
###Output
_____no_output_____
###Markdown
The [`tf.truncated_normal()`](https://www.tensorflow.org/api_docs/python/tf/truncated_normal) function returns a tensor of the specified shape (120, 5) filled with random truncated normal values from a normal distribution whose magnitude is no more than 2 standard deviations from the mean.Since the weights are already helping prevent the model from getting stuck, you don't need to randomize the bias. Let's use the simplest solution, setting the bias to 0. tf.zeros()
###Code
n_labels = 5
bias = tf.Variable(tf.zeros(n_labels))
# for example
tf.zeros([3, 4], tf.int32) # [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
###Output
_____no_output_____
###Markdown
The `tf.zeros()` function returns a tensor with all zeros. Linear Classifier QuizHere is a subset of MNIST datasetYou'll be classifying the handwritten numbers 0, 1, and 2 from the MNIST dataset using TensorFlow. The above is a small sample of the data you'll be training on. Notice how some of the 1s are written with a serif at the top and at different angles. The similarities and differences will play a part in shaping the weights of the model.Left: Weights for labeling 0. Middle: Weights for labeling 1. Right: Weights for labeling 2.The images above are trained weights for each label (0, 1, and 2). The weights display the unique properties of each digit they have found. Complete this quiz to train your own weights using the MNIST dataset. InstructionsIn quiz.py.- Implement get_weights to return a tf.Variable of weights- Implement get_biases to return a tf.Variable of biases- Implement xW + b in the linear functionIn sandbox.py- Initialize all weightsSince $xW$ in $xW + b$ is matrix multiplication, you have to use the `tf.matmul()` function instead of `tf.multiply()`. Don't forget that order matters in matrix multiplication, so `tf.matmul(a,b)` is not the same as `tf.matmul(b,a)`.
###Code
# quiz.py
import tensorflow as tf
def get_weights(n_features, n_labels):
"""
Return TensorFlow weights
:param n_features: Number of features
:param n_labels: Number of labels
:return: TensorFlow weights
"""
# TODO: Return weights
weights = tf.Variable(tf.truncated_normal((n_features, n_labels)))
return weights
def get_biases(n_labels):
"""
Return TensorFlow bias
:param n_labels: Number of labels
:return: TensorFlow bias
"""
# TODO: Return biases
bias = tf.Variable(tf.zeros(n_labels))
return bias
def linear(input, w, b):
"""
Return linear function in TensorFlow
:param input: TensorFlow input
:param w: TensorFlow weights
:param b: TensorFlow biases
:return: TensorFlow linear function
"""
# TODO: Linear Function (xW + b)
linear = tf.add(tf.matmul(input, w), b)
return linear
# sandbox.py
from tensorflow.examples.tutorials.mnist import input_data
# from quiz import get_weights, get_biases, linear
def mnist_features_labels(n_labels):
"""
Gets the first <n> labels from the MNIST dataset
:param n_labels: Number of labels to use
:return: Tuple of feature list and label list
"""
mnist_features = []
mnist_labels = []
mnist = input_data.read_data_sets('/datasets/ud730/mnist', one_hot=True)
# In order to make quizzes run faster, we're only looking at 10000 images
for mnist_feature, mnist_label in zip(*mnist.train.next_batch(10000)):
# Add features and labels if it's for the first <n>th labels
if mnist_label[:n_labels].any():
mnist_features.append(mnist_feature)
mnist_labels.append(mnist_label[:n_labels])
return mnist_features, mnist_labels
# Number of features (28*28 image is 784 features)
n_features = 784
# Number of labels
n_labels = 3
# Features and Labels
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# Weights and Biases
w = get_weights(n_features, n_labels)
b = get_biases(n_labels)
# Linear Function xW + b
logits = linear(features, w, b)
# Training data
train_features, train_labels = mnist_features_labels(n_labels)
with tf.Session() as session:
# TODO: Initialize session variables
session.run(tf.global_variables_initializer())
# Softmax
prediction = tf.nn.softmax(logits)
# Cross entropy
# This quantifies how far off the predictions were.
# You'll learn more about this in future lessons.
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
# You'll learn more about this in future lessons.
loss = tf.reduce_mean(cross_entropy)
# Rate at which the weights are changed
# You'll learn more about this in future lessons.
learning_rate = 0.08
# Gradient Descent
# This is the method used to train the model
# You'll learn more about this in future lessons.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: train_features, labels: train_labels})
# Print loss
print('Loss: {}'.format(l))
###Output
_____no_output_____
###Markdown
TensorFlow SoftmaxThe softmax function squashes its inputs, typically called logits or logit scores, to be between 0 and 1 and also normalizes the outputs such that they all sum to 1. This means the output of the softmax function is equivalent to a categorical probability distribution. It's the perfect function to use as the output activation for a network predicting multiple classes.$$ P(class_i) = \frac{e^{z_i}}{e^{z_1}+\dots+e^{z_n}} $$ TensorFlow SoftmaxWe're using TensorFlow to build neural networks and, appropriately, there's a function for calculating softmax.
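###Markdown
Before reaching for the TensorFlow helper, the formula can be sanity-checked with a few lines of plain NumPy; this snippet is an illustration and not part of the original notebook:
```python
import numpy as np

def softmax(z):
    e = np.exp(z)        # e^{z_i} for every logit
    return e / e.sum()   # normalize so the outputs sum to 1

print(softmax([2.0, 1.0, 0.1]))  # ~[0.659 0.242 0.099], matching the quiz output further below
```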
###Code
x = tf.nn.softmax([2.0, 1.0, 0.2])
###Output
_____no_output_____
###Markdown
Easy as that! `tf.nn.softmax()` implements the softmax function for you. It takes in logits and returns softmax activations. QuizUse the softmax function in the quiz below to return the softmax of the logits.
###Code
def run():
output = None
logit_data = [2.0, 1.0, 0.1]
logits = tf.placeholder(tf.float32)
softmax = tf.nn.softmax(logits)
with tf.Session() as sess:
output = sess.run(softmax, feed_dict={logits: logit_data})
return output
print (run())
###Output
[ 0.65900117 0.24243298 0.09856589]
|
Notebook-Class-exercises/.ipynb_checkpoints/Step-3-Prepare-Data-Task-3-checkpoint.ipynb | ###Markdown
Data Prep for USA_Facts confirmed cases at county level Read file
###Code
if using_Google_colab:
df_county_population = pd.read_csv('/content/drive/MyDrive/Covid_Project/input/USA_Facts/covid_county_population_usafacts.csv')
if using_Anaconda_on_Mac_or_Linux:
df_county_population = pd.read_csv('../input/USA_Facts/covid_county_population_usafacts.csv')
if using_Anaconda_on_windows:
df_county_population = pd.read_csv(r'..\input\USA_Facts\covid_county_population_usafacts.csv')
df_county_population
###Output
_____no_output_____
###Markdown
Remove rows with 0 in countyFIPS
###Code
df_non_zero_county_population = df_county_population[(df_county_population['countyFIPS'] > 0)].astype({'countyFIPS': str})
df_non_zero_county_population
###Output
_____no_output_____
###Markdown
Compute state population by adding county population
###Code
df_population_by_state = df_non_zero_county_population.groupby(['State']).sum().reset_index()
df_population_by_state
###Output
_____no_output_____
###Markdown
Read partial analytics_base_table with confirmed cases and deaths at state level
###Code
if using_Google_colab:
df_partial_abt_by_state = pd.read_csv('/content/drive/MyDrive/COVID_Project/output/partial_abt_by_state.csv')
if using_Anaconda_on_Mac_or_Linux:
df_partial_abt_by_state = pd.read_csv('../output/partial_abt_by_state.csv')
if using_Anaconda_on_windows:
df_partial_abt_by_state = pd.read_csv(r'..\output\partial_abt_by_state.csv')
df_partial_abt_by_state
###Output
_____no_output_____
###Markdown
Read partial analytics_base_table with confirmed cases and deaths at county level
###Code
if using_Google_colab:
df_partial_abt_by_county = pd.read_csv('/content/drive/MyDrive/COVID_Project/output/partial_abt_by_county.csv')
if using_Anaconda_on_Mac_or_Linux:
df_partial_abt_by_county = pd.read_csv('../output/partial_abt_by_county.csv')
if using_Anaconda_on_windows:
df_partial_abt_by_county = pd.read_csv(r'..\output\partial_abt_by_county.csv')
df_partial_abt_by_county
###Output
_____no_output_____
###Markdown
Merge abt at state level with state population data
###Code
df_partial_abt_by_state_2 = pd.merge(df_partial_abt_by_state, df_population_by_state, on=['State'], suffixes=('', '_DROP'), how='inner').filter(regex='^(?!.*_DROP)')
df_partial_abt_by_state_2
if using_Google_colab:
df_partial_abt_by_state_2.to_csv('/content/drive/MyDrive/Covid Dataset/data_prep/output/partial_abt_by_state_2.csv')
if using_Anaconda_on_Mac_or_Linux:
df_partial_abt_by_state_2.to_csv('../output/partial_abt_by_state_2.csv')
if using_Anaconda_on_windows:
df_partial_abt_by_state_2.to_csv(r'..\output\partial_abt_by_state_2.csv')
###Output
_____no_output_____
###Markdown
Merge abt at county level with county population data---
###Code
df_partial_abt_by_county = df_partial_abt_by_county.astype({'countyFIPS': str})
df_county_population = df_county_population.astype({'countyFIPS': str})
df_partial_abt_by_county_2 = pd.merge(df_partial_abt_by_county, df_county_population, on=['countyFIPS'], suffixes=('', '_DROP'), how='inner').filter(regex='^(?!.*_DROP)')
df_partial_abt_by_county_2
if using_Google_colab:
df_partial_abt_by_county_2.to_csv('/content/drive/MyDrive/Covid Dataset/data_prep/output/partial_abt_by_county_2.csv')
if using_Anaconda_on_Mac_or_Linux:
df_partial_abt_by_county_2.to_csv('../output/partial_abt_by_county_2.csv')
if using_Anaconda_on_windows:
df_partial_abt_by_county_2.to_csv(r'..\output\partial_abt_by_county_2.csv')
###Output
_____no_output_____ |
notebooks/Tests-and-demos.ipynb | ###Markdown
The Viewer
###Code
import numpy as np
import json
import time
import ipywidgets as widgets
from cad_viewer_widget import (
AnimationTrack, CadViewer, show, open_viewer,
get_sidecar, get_sidecars, close_sidecar, close_sidecars, get_default_sidecar, set_default_sidecar
)
from cad_viewer_widget.utils import numpyify
names = ["hexapod", "box", "box1", "boxes", "faces", "edges", "vertices", "box-faces", "box-edges", "box-vertices", "longbox"]
objects = {}
states = {}
for name in names:
with open(f"../examples/{name}.json", "r") as fd:
objects[name] = numpyify(json.load(fd))
with open(f"../examples/{name}-states.json", "r") as fd:
states[name] = json.load(fd)
###Output
_____no_output_____
###Markdown
Cell view
###Code
name = "boxes"
control = "trackball"
#control = "orbit"
cv = show(
objects[name],
states[name],
control=control,
cad_width=750,
tree_width=250,
height=600,
glass=False,
js_debug=False,
collapse=1,
theme="browser",
)
###Output
_____no_output_____
###Markdown
Exports**Modify view before exporting**
###Code
cv.export_png("boxes.png")
cv.export_html()
###Output
_____no_output_____
###Markdown
**Pin as PNG**
###Code
cv.pin_as_png() # same as pressing the pin top right button
###Output
_____no_output_____
###Markdown
Sidecar handling openviewer and add_shapes
###Code
cv1 = open_viewer(
title="CVW 1",
anchor="split-right",
cad_width=700,
tree_width=250,
height=525,
glass=True
)
name = "hexapod"
cv1.add_shapes(
objects[name],
states[name],
cad_width=500,
height=3000,
#tools=False,
#ortho=False,
control="trackball",
axes=False,
axes0=False,
grid=(True,True, False),
ticks=10,
transparent=True,
#black_edges=True,
normal_len=0,
default_edge_color="#707070",
default_opacity=0.5,
ambient_intensity=0.5,
direct_intensity=0.3,
reset_camera=True,
position = (865.4844022079983, -276.23389988421786, 335.21716816984906),
quaternion = (0.43557639340677845, 0.3648618806188253, 0.4409953598863984, 0.6947460731351442),
zoom=0.8,
timeit=False,
zoom_speed=0.5,
pan_speed=0.5,
rotate_speed=1.0,
js_debug=True,
)
name = "boxes"
show(
objects[name],
states[name],
title="CVW 1",
cad_width=800,
height=600,
glass=False,
)
###Output
_____no_output_____
###Markdown
Show command
###Code
name = "boxes"
cv2 = show(
objects[name],
states[name],
title="CVW 2",
anchor="split-right",
height=600,
cad_width=800,
ortho=False,
control="orbit",
axes=True,
grid=(True, False, False),
ticks=40,
transparent=True,
#black_edges=True,
normal_len=2,
default_edge_color="#707070",
default_opacity=0.5,
ambient_intensity=0.5,
direct_intensity=0.3,
)
cv1.close()
get_sidecars()
name = "hexapod"
cv = show(
objects[name],
states[name],
title="CVW 2",
collapse=1,
glass=True,
)
import numpy as np
from cad_viewer_widget import AnimationTrack
horizontal_angle = 25
def intervals(count):
r = [ min(180, (90 + i*(360 // count)) % 360) for i in range(count)]
return r
def times(end, count):
return np.linspace(0, end, count+1)
def vertical(count, end, offset, reverse):
ints = intervals(count)
heights = [round(35 * np.sin(np.deg2rad(x)) - 15, 1) for x in ints]
heights.append(heights[0])
return times(end, count), heights[offset:] + heights[1:offset+1]
def horizontal(end, reverse):
factor = 1 if reverse else -1
return times(end, 4), [0, factor * horizontal_angle, 0, -factor * horizontal_angle, 0]
leg_group = ("left_front", "right_middle", "left_back")
leg_names = ['right_back', 'right_middle', 'right_front', 'left_back', 'left_middle', 'left_front']
for name in leg_names:
# move upper leg
cv.add_track(AnimationTrack(
f"/bottom/{name}",
"rz", *horizontal(4, "middle" in name)
))
# move lower leg
cv.add_track(AnimationTrack(
f"/bottom/{name}/lower",
"rz", *vertical(8, 4, 0 if name in leg_group else 4, "left" in name)
))
cv.animate(3)
cv2.close()
cv2.disposed
get_sidecars()
###Output
_____no_output_____
###Markdown
Use default sidecar
###Code
set_default_sidecar("CVW 1")
name = "edges"
cv = show(
objects[name],
states[name],
height=600,
cad_width=800,
reset_camera=True,
js_debug=True
)
name = "faces"
cv2 = show(
objects[name],
states[name],
height=600,
cad_width=800,
title="CVW 2"
)
name = "vertices"
cv2 = show(
objects[name],
states[name],
reset_camera=False,
title="CVW 2"
)
cv.dump_model()
get_sidecars()
get_sidecar("CVW 1") == cv
get_sidecar("CVW 2") == cv
close_sidecars()
###Output
_____no_output_____
###Markdown
Cell Viewer Handling
###Code
name = "hexapod"
cv3 = show(
objects[name],
states[name],
height=600,
cad_width=800,
control="trackball",
tools=True,
axes=True,
axes0=True,
grid=[True, False, True],
transparent=True,
black_edges=True,
ortho=False,
timeit=True,
# normal_len=5,
)
name = "faces"
cv4 = show(
objects[name],
states[name],
cad_width=400,
height=300,
glass=True,
collapse=2,
pinning=True
)
cv4.remove_ui_elements(["axes", "axes0", "grid", "ortho", "more", "help"])
###Output
_____no_output_____
###Markdown
Camera location handling Trackball controls
###Code
name = "edges"
cv1 = show(
objects[name],
states[name],
height=600,
cad_width=800,
title="Trackball",
reset_camera=True,
js_debug=True
)
cv1.position=(96.5764, -1.7474, 37.7064)
cv1.quaternion=(0.4059, 0.3049, 0.7413, 0.4389)
cv1.zoom=0.6
cv1.target=(6.9493, -11.6226, -12.2272)
###Output
_____no_output_____
###Markdown
**Do not reset camera location**
###Code
name = "faces"
show(
objects[name],
states[name],
title="Trackball",
reset_camera=False
)
###Output
_____no_output_____
###Markdown
**Reset camera location**
###Code
name = "faces"
cv = show(
objects[name],
states[name],
title="Trackball",
reset_camera=True
)
###Output
_____no_output_____
###Markdown
Orbit controls
###Code
cv = open_viewer(
title="Orbit",
cad_width=700,
height=525,
)
###Output
_____no_output_____
###Markdown
**Setting camera location during show will also set the reset location of the camera**
###Code
name = "edges"
show(
objects[name],
states[name],
title="Orbit",
control="orbit",
position=(-43.3, 73.7, -39.3),
zoom=0.5,
reset_camera=True
)
cv.position, cv.quaternion, cv.target, cv.zoom
cv.position = (85, 25, 55)
cv.target = (0,0,0)
cv.zoom = 0.8
cv.position, cv.quaternion, cv.zoom
cv.position, cv.quaternion, cv.target, cv.zoom
###Output
_____no_output_____
###Markdown
**Quaternions with orbit control can be accessed from widget, however, for information only**
###Code
cv.widget.quaternion
###Output
_____no_output_____
###Markdown
Property access
###Code
cv = open_viewer(
title = "Examples",
anchor="right",
cad_width=700,
height=525,
glass=False
)
menu = widgets.Dropdown(
options=names,
value=names[0],
description='Number:',
disabled=False,
)
control = "trackball"
def on_change(change):
if change['type'] == 'change' and change['name'] == 'value':
name = change['new']
show(
objects[name],
states[name],
title="Examples",
control=control,
js_debug=True
)
menu.observe(on_change)
show(
objects[names[0]],
states[names[0]],
title="Examples",
control=control,
# zoom=0.75,
js_debug=True
)
menu
cv.widget.cad_width = 900
cv.widget.tree_width = 300
cv.widget.height = 700
cv.widget.glass = True
cv.widget.cad_width = 700
cv.widget.tree_width = 250
cv.widget.height = 525
cv.widget.glass = False
###Output
_____no_output_____
###Markdown
Widget interaction
###Code
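# Note (assumption): each state value below is taken to be a pair of 0/1 visibility flags,
# (shape visibility, edge visibility), for the tree node addressed by the path key.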
cv.update_states({
'/bottom/bottom_0': (1,0),
'/bottom/top/top_0': [0,1],
})
cv.update_states({
'/bottom/bottom_0': (1,1),
'/bottom/top/top_0': [1,1],
})
cv.widget.collapse = 2
cv.widget.collapse = 1
cv.ambient_intensity = 0.9
cv.direct_intensity = 0.5
cv.ambient_intensity = 0.5
cv.direct_intensity = 0.3
ec = cv.default_edge_color
cv.default_edge_color = "#ff0000"
cv.default_edge_color = ec
cv.grid = [not g for g in cv.widget.grid]
cv.axes = not cv.axes
cv.axes0 = not cv.axes0
cv.transparent = not cv.transparent
cv.black_edges = not cv.black_edges
cv.tools = not cv.tools
cv.ortho = not cv.ortho
cv.grid = [not g for g in cv.widget.grid]
cv.axes = not cv.axes
cv.axes0 = not cv.axes0
cv.transparent = not cv.transparent
cv.black_edges = not cv.black_edges
cv.tools = not cv.tools
cv.glass = not cv.glass
cv.ortho = not cv.ortho
cv.glass = not cv.glass
cv.zoom_speed = 5
cv.pan_speed = 5
cv.rotate_speed = 5
cv.zoom_speed =1
cv.pan_speed =1
cv.rotate_speed =1
cv.last_pick
###Output
_____no_output_____
###Markdown
Clipping handling
###Code
cv.select_clipping()
cv.clip_intersection = not cv.clip_intersection
cv.clip_planes = not cv.clip_planes
cv.clip_value_0 = 10
cv.clip_value_1 = -50
cv.clip_value_2 = 40
cv.clip_normal_0
cv.clip_value_2
cv.clip_normal_0 = (-0.35, -0.35, -0.35)
cv.clip_normal_0 = (-1, 0, 0)
cv.select_tree()
###Output
_____no_output_____
###Markdown
Rotations Trackball Control
###Code
name = "hexapod"
cv = show(
objects[name],
states[name],
control="trackball",
title="Examples",
reset_camera=True,
glass=False
)
for i in range(10):
cv.rotate_x(1)
cv.rotate_y(3)
cv.rotate_z(5)
time.sleep(0.05)
for i in range(10):
cv.rotate_z(-5)
cv.rotate_y(-3)
cv.rotate_x(-1)
time.sleep(0.05)
###Output
_____no_output_____
###Markdown
Orbit control
###Code
name = "hexapod"
cv = show(
objects[name],
states[name],
control="orbit",
title="Examples",
reset_camera=True
)
for i in range(10):
cv.rotate_up(3)
cv.rotate_left(1)
time.sleep(0.05)
for i in range(10):
cv.rotate_left(-1)
cv.rotate_up(-3)
time.sleep(0.05)
###Output
_____no_output_____
###Markdown
Animation
###Code
name = "hexapod"
cv = show(
objects[name],
states[name],
title="Animation",
height=600,
cad_width=800,
control="trackball",
tools=True,
axes=True,
axes0=True,
grid=[True, False, False],
)
import numpy as np
horizontal_angle = 25
leg_names = {
"right_back", "right_middle", "right_front",
"left_back", "left_middle", "left_front",
}
def intervals(count):
r = [ min(180, (90 + i*(360 // count)) % 360) for i in range(count)]
return r
def times(end, count):
return np.linspace(0, end, count+1).tolist()
def vertical(count, end, offset, reverse):
ints = intervals(count)
heights = [round(35 * np.sin(np.deg2rad(x)) - 15, 1) for x in ints]
heights.append(heights[0])
return times(end, count), heights[offset:] + heights[1:offset+1]
def horizontal(end, reverse):
factor = 1 if reverse else -1
return times(end, 4), [0, factor * horizontal_angle, 0, -factor * horizontal_angle, 0]
leg_group = ("left_front", "right_middle", "left_back")
tracks = []
for name in leg_names:
# move upper leg
cv.add_track(AnimationTrack(f"/bottom/{name}", "rz", *horizontal(4, "middle" in name)))
cv.animate(3)
cv.play()
cv.stop()
for name in leg_names:
# move lower leg
cv.add_track(AnimationTrack(f"/bottom/{name}/lower", "rz", *vertical(8, 4, 0 if name in leg_group else 4, "left" in name)))
cv.animate(2)
cv.play()
cv.clear_tracks()
close_sidecars()
###Output
_____no_output_____ |
.ipynb_checkpoints/HeroesOfPymoli_starter-checkpoint.ipynb | ###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
# Count the number of unique players
purchase_data["SN"].nunique()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
from IPython.display import display as disp
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
sales_data = pd.read_csv(file_to_load).dropna()
sales_data.head()
disp(sales_data.groupby("SN").sum().sort_values("Price", ascending=False).head())
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
player_num = sales_data["SN"].nunique()
player_count = pd.DataFrame({"Total Number of Players": [player_num]})
disp(player_count.style.hide_index())
disp(player_count)
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
col = ["Number of Unique Items",
"Average Price",
"Number of Purchases",
"Total Revenue"]
val = [[sales_data["Item ID"].nunique()],
[sales_data["Price"].mean()],
[sales_data["SN"].count()],
[sales_data["Price"].sum()]]
summary_table = pd.DataFrame(dict(zip(col, val))).round(2)
forms = ['{:>,}', '${:>,}', '{:>,}', '${:>,}']
for i, var in enumerate(col):
summary_table[var] = summary_table[var].map(forms[i].format)
disp(summary_table.style.hide_index())
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
genders = sales_data[["Gender", "SN"]].drop_duplicates().groupby("Gender").count()
genders["Gender Percentage"] = (genders["SN"] / player_num).round(4).map('{:>.2%}'.format)
gender_demographics = genders.rename(columns={"SN": "Gender Count"})
disp(gender_demographics)
genders = pd.DataFrame(sales_data.drop_duplicates(subset="SN")["Gender"].value_counts())
genders["Gender Percentage"] = (genders["Gender"] / player_num).round(4).map('{:>8.2%}'.format)
gender_demographics = genders.rename(columns={"Gender": "Gender Count"}).rename_axis("Gender")
disp(gender_demographics)
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "/Users/danvaldes/Desktop/bootcamp/repo/04-Pandas/Homework/Instructions/HeroesOfPymoli/Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
###Output
_____no_output_____
###Markdown
Player Count
###Code
purchase_data.head()
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
total_players = len(purchase_data ["SN"].unique())
total_players
###Output
_____no_output_____
###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data_df = pd.read_csv(file_to_load)
#print table
purchase_data_df.head()
#clean data
purchase_data_df.count()
#drop null rows
purchase_data_df.dropna(how='any')
purchase_data_df.count()
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
#calculating number of players in the DataFrame
total_players = len(purchase_data_df["SN"].unique())
#creating a table for total_players
total_player_summary = pd.DataFrame({"Total Players": [total_players]})
total_player_summary
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#number of unique items
unique_items = len(purchase_data_df["Item ID"].unique())
#average price
average_price = (purchase_data_df["Price"].mean())
#number of purchases
total_purchases = (purchase_data_df["Purchase ID"].count())
#total revenue
total_revenue = (purchase_data_df["Price"].sum())
purchasing_analysis_df = pd.DataFrame(
{'Number of Unique Items': [unique_items],
'Average Price': [average_price],
'Total Number of Purchases': [total_purchases],
'Total Revenue': [total_revenue]}
)
#formating numbers
purchasing_analysis_df.style.format(
{"Average Price":"${:,.2f}",
"Total Revenue":"${:,.2f}"
})
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
#group genders using groupby functionality
gender_group = purchase_data_df.groupby("Gender")
# get total count of genders. used nunique instead of unique for groupby
total_gender = gender_group.nunique()["SN"]
# getting percentage of each genders
percentage_of_players = total_gender / total_players * 100
# Creating Data Frame
gender_demographics = pd.DataFrame(
{"Total Count": total_gender,
"Percentage of Players": percentage_of_players,
})
# Format the data frame with no index name
gender_demographics.index.name = None
# Format the numbers
gender_demographics.sort_values(["Total Count"], ascending = False).style.format({"Percentage of Players":"{:.2f}%"})
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#purchase count of genders
purchase_count = gender_group["Purchase ID"].count()
#average purchase prices by gender
average_purchase_price = gender_group["Price"].mean()
#average purchase total by gender
average_purchase_total = gender_group["Price"].sum()
#average purchase per person
average_purchase_person = average_purchase_total/total_gender
#create Data Frame
purchasing_analysis_gender_df = pd.DataFrame(
{"Purchase Count": purchase_count,
"Average Purchase Price": average_purchase_price,
"Total Purchase Value":average_purchase_total,
"Avg Purchase Total per Person": average_purchase_person
})
#index at Gender
purchasing_analysis_gender_df.index.name = "Gender"
#format numbers
purchasing_analysis_gender_df.style.format(
{"Average Purchase Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}",
"Avg Purchase Total per Person":"${:,.2f}"
})
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
#creating bins
bins = [0, 9, 14, 19, 24, 29, 34, 39, 1000]
age_groups= ["<10", "10-14", "15-19", "20-24", "25-29", "30-34", "35-39", "40+"]
purchase_data_df["Age Group"] = pd.cut(purchase_data_df["Age"], bins, labels=age_groups)
#use groupby for new data frame
bins_df = purchase_data_df.groupby("Age Group")
#total players in each age group
age_total = bins_df["SN"].nunique()
#percent of players in each age group
age_percentage = age_total/total_players * 100
#create Data Frame
age_demographics_df = pd.DataFrame(
{"Total Count": age_total,
"Percentage of Players": age_percentage
})
age_demographics_df.index.name = "Age Ranges"
#format numbers
age_demographics_df.style.format({"Percentage of Players": "{:,.2f}%"})
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#purchase count
purchase_age_count = bins_df["Purchase ID"].count()
#average purchase price
average_price_age = bins_df["Price"].mean()
#total purchase value
total_purchase_age = bins_df["Price"].sum()
#average total purchase per Person
average_purchase_age = total_purchase_age/age_total
#create Data Frame
purchasing_analysis_age_df = pd.DataFrame(
{"Purchase Count": purchase_age_count,
"Average Purchase Price": average_price_age,
"Total Purchase Value": total_purchase_age,
"Avg Total Purchase per Person": average_purchase_age
})
purchasing_analysis_age_df.index.name = "Age Ranges"
#format numbers
purchasing_analysis_age_df.style.format(
{"Average Purchase Price": "${:,.2f}",
"Total Purchase Value": "${:,.2f}",
"Avg Total Purchase per Person": "${:,.2f}"
})
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#use groupby
top_spenders = purchase_data_df.groupby("SN")
#purchase count
purchase_spender_count = top_spenders["Purchase ID"].count()
#average purchase price
average_purchase_spender = top_spenders["Price"].mean()
#total purchase value
total_purchase_spender = top_spenders["Price"].sum()
#create data frame
top_spenders_df = pd.DataFrame(
{"Purchase Count": purchase_spender_count,
"Average Purchase Price": average_purchase_spender,
"Total Purchase Value": total_purchase_spender
})
#top_spenders_df.head()
#make sure it's in descending order and retrieve the first 5 top spenders
top_spenders_df = top_spenders_df.sort_values(["Total Purchase Value"], ascending=False).head()
#format numbers
top_spenders_df.style.format(
{"Average Purchase Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}"
})
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, average item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
#extract columns
popular_items = purchase_data_df[["Item ID", "Item Name", "Price"]]
# use groupby to put Item Id and Item Name together
items_groupby = popular_items.groupby(["Item ID","Item Name"])
# item count
item_count = items_groupby["Price"].count()
# total purchase value
total_value_item =items_groupby["Price"].sum()
# average item price
item_price = total_value_item/item_count
# create Data Frame
popular_items_df = pd.DataFrame(
{"Purchase Count": item_count,
"Item Price": item_price,
"Total Purchase Value":total_value_item
})
#ensure this is going in descending order
popular_items_df = popular_items_df.sort_values(["Purchase Count"], ascending=False).head()
# Format numbers
popular_items_df.style.format(
{"Item Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}"
})
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
#ensure this is going in descending order by total purchase value
popular_items_df = popular_items_df.sort_values(["Total Purchase Value"], ascending=False).head()
# Format numbers
popular_items_df.style.format(
{"Item Price":"${:,.2f}",
"Total Purchase Value":"${:,.2f}"
})
###Output
_____no_output_____ |
gensim/docs/notebooks/Topics_and_Transformations.ipynb | ###Markdown
Topics and Transformation Don't forget to set
###Code
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
import tempfile
import os.path
TEMP_FOLDER = tempfile.gettempdir()
print('Folder "{}" will be used to save temporary dictionary and corpus.'.format(TEMP_FOLDER))
###Output
Folder "C:\Users\chaor\AppData\Local\Temp" will be used to save temporary dictionary and corpus.
###Markdown
if you want to see logging events. Transformation interfaceIn the previous tutorial on [Corpora and Vector Spaces](https://radimrehurek.com/gensim/tut1.html), we created a corpus of documents represented as a stream of vectors. To continue, let’s fire up gensim and use that corpus:
###Code
from gensim import corpora, models, similarities
if os.path.isfile(os.path.join(TEMP_FOLDER, 'deerwester.dict')):
dictionary = corpora.Dictionary.load(os.path.join(TEMP_FOLDER, 'deerwester.dict'))
corpus = corpora.MmCorpus(os.path.join(TEMP_FOLDER, 'deerwester.mm'))
print("Used files generated from first tutorial")
else:
print("Please run first tutorial to generate data set")
print(dictionary[0])
print(dictionary[1])
print(dictionary[2])
###Output
human
interface
computer
###Markdown
In this tutorial, I will show how to transform documents from one vector representation into another. This process serves two goals:1. To bring out hidden structure in the corpus, discover relationships between words and use them to describe the documents in a new and (hopefully) more semantic way.1. To make the document representation more compact. This both improves efficiency (new representation consumes less resources) and efficacy (marginal data trends are ignored, noise-reduction). Creating a transformationThe transformations are standard Python objects, typically initialized by means of a training corpus:
###Code
tfidf = models.TfidfModel(corpus) # step 1 -- initialize a model
###Output
_____no_output_____
###Markdown
We used our old corpus from tutorial 1 to initialize (train) the transformation model. Different transformations may require different initialization parameters; in case of TfIdf, the “training” consists simply of going through the supplied corpus once and computing document frequencies of all its features. Training other models, such as Latent Semantic Analysis or Latent Dirichlet Allocation, is much more involved and, consequently, takes much more time.> Note:> Transformations always convert between two specific vector spaces. The same vector space (= the same set of feature ids) must be used for training as well as for subsequent vector transformations. Failure to use the same input feature space, such as applying a different string preprocessing, using different feature ids, or using bag-of-words input vectors where TfIdf vectors are expected, will result in feature mismatch during transformation calls and consequently in either garbage output and/or runtime exceptions.
###Code
doc_bow = [(0, 1), (1, 1)]
print(tfidf[doc_bow]) # step 2 -- use the model to transform vectors
###Output
[(0, 0.7071067811865476), (1, 0.7071067811865476)]
###Markdown
Or to apply a transformation to a whole corpus:
###Code
corpus_tfidf = tfidf[corpus]
for doc in corpus_tfidf:
print(doc)
###Output
[(0, 0.5773502691896257), (1, 0.5773502691896257), (2, 0.5773502691896257)]
[(2, 0.44424552527467476), (3, 0.44424552527467476), (4, 0.3244870206138555), (5, 0.3244870206138555), (6, 0.44424552527467476), (7, 0.44424552527467476)]
[(1, 0.5710059809418182), (4, 0.4170757362022777), (5, 0.4170757362022777), (8, 0.5710059809418182)]
[(0, 0.49182558987264147), (5, 0.7184811607083769), (8, 0.49182558987264147)]
[(4, 0.45889394536615247), (6, 0.6282580468670046), (7, 0.6282580468670046)]
[(9, 1.0)]
[(9, 0.7071067811865475), (10, 0.7071067811865475)]
[(9, 0.5080429008916749), (10, 0.5080429008916749), (11, 0.695546419520037)]
[(3, 0.6282580468670046), (10, 0.45889394536615247), (11, 0.6282580468670046)]
###Markdown
In this particular case, we are transforming the same corpus that we used for training, but this is only incidental. Once the transformation model has been initialized, it can be used on any vectors (provided they come from the same vector space, of course), even if they were not used in the training corpus at all. This is achieved by a process called folding-in for LSA, by topic inference for LDA etc.> Note: > Calling model[corpus] only creates a wrapper around the old corpus document stream – actual conversions are done on-the-fly, during document iteration. We cannot convert the entire corpus at the time of calling corpus_transformed = model[corpus], because that would mean storing the result in main memory, and that contradicts gensim’s objective of memory-independence. If you will be iterating over the transformed corpus_transformed multiple times, and the transformation is costly, serialize the resulting corpus to disk first and continue using that.Transformations can also be serialized, one on top of another, in a sort of chain:
###Code
lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2) # initialize an LSI transformation
corpus_lsi = lsi[corpus_tfidf] # create a double wrapper over the original corpus: bow->tfidf->fold-in-lsi
###Output
_____no_output_____
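###Markdown
Following the note above, if a costly transformed corpus will be iterated over many times it can be serialized to disk once and streamed back from there. A minimal sketch, assuming the temporary folder and imports used earlier in this notebook; the file name is illustrative:
###Code
# Persist the lazily-evaluated TF-IDF stream once, then reuse the on-disk copy.
corpora.MmCorpus.serialize(os.path.join(TEMP_FOLDER, 'corpus_tfidf.mm'), corpus_tfidf)
corpus_tfidf_disk = corpora.MmCorpus(os.path.join(TEMP_FOLDER, 'corpus_tfidf.mm'))
###Output
_____no_output_____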
###Markdown
Here we transformed our Tf-Idf corpus via [Latent Semantic Indexing](http://en.wikipedia.org/wiki/Latent_semantic_indexing) into a latent 2-D space (2-D because we set num_topics=2). Now you’re probably wondering: what do these two latent dimensions stand for? Let’s inspect with models.LsiModel.print_topics():
###Code
lsi.print_topics(2)
###Output
_____no_output_____
###Markdown
(the topics are printed to log – see the note at the top of this page about activating logging)It appears that according to LSI, “trees”, “graph” and “minors” are all related words (and contribute the most to the direction of the first topic), while the second topic practically concerns itself with all the other words. As expected, the first five documents are more strongly related to the second topic while the remaining four documents to the first topic:
###Code
for doc in corpus_lsi: # both bow->tfidf and tfidf->lsi transformations are actually executed here, on the fly
print(doc)
lsi.save(os.path.join(TEMP_FOLDER, 'model.lsi')) # same for tfidf, lda, ...
#lsi = models.LsiModel.load(os.path.join(TEMP_FOLDER, 'model.lsi'))
###Output
_____no_output_____
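###Markdown
As a quick preview of the similarity queries mentioned next (and covered fully in the following tutorial), a minimal sketch using the `similarities` module already imported above; the query is just the toy `doc_bow` from earlier.
###Code
# Build an in-memory similarity index over the LSI-transformed corpus and query it.
index = similarities.MatrixSimilarity(lsi[corpus_tfidf], num_features=2)
sims = index[lsi[tfidf[doc_bow]]]
print(list(enumerate(sims)))
###Output
_____no_output_____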
###Markdown
The next question might be: just how exactly similar are those documents to each other? Is there a way to formalize the similarity, so that for a given input document, we can order some other set of documents according to their similarity? Similarity queries are covered in the [next tutorial](https://radimrehurek.com/gensim/tut3.html). Available transformationsGensim implements several popular Vector Space Model algorithms: [Term Frequency * Inverse Document Frequency](http://en.wikipedia.org/wiki/Tf–idf) Tf-Idf expects a bag-of-words (integer values) training corpus during initialization. During transformation, it will take a vector and return another vector of the same dimensionality, except that features which were rare in the training corpus will have their value increased. It therefore converts integer-valued vectors into real-valued ones, while leaving the number of dimensions intact. It can also optionally normalize the resulting vectors to (Euclidean) unit length.
###Code
model = models.TfidfModel(corpus, normalize=True)
###Output
_____no_output_____
###Markdown
[Latent Semantic Indexing, LSI (or sometimes LSA)](http://en.wikipedia.org/wiki/Latent_semantic_indexing) LSI transforms documents from either bag-of-words or (preferably) TfIdf-weighted space into a latent space of a lower dimensionality. For the toy corpus above we used only 2 latent dimensions, but on real corpora, target dimensionality of 200–500 is recommended as a “golden standard” [1].
###Code
model = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=300)
###Output
_____no_output_____
###Markdown
LSI training is unique in that we can continue “training” at any point, simply by providing more training documents. This is done by incremental updates to the underlying model, in a process called online training. Because of this feature, the input document stream may even be infinite – just keep feeding LSI new documents as they arrive, while using the computed transformation model as read-only in the meanwhile! > Example > > model.add_documents(another_tfidf_corpus) now LSI has been trained on tfidf_corpus + another_tfidf_corpus> lsi_vec = model[tfidf_vec] convert some new document into the LSI space, without affecting the model> model.add_documents(more_documents) tfidf_corpus + another_tfidf_corpus + more_documents> lsi_vec = model[tfidf_vec] See the [gensim.models.lsimodel](https://radimrehurek.com/gensim/models/lsimodel.htmlmodule-gensim.models.lsimodel) documentation for details on how to make LSI gradually “forget” old observations in infinite streams. If you want to get dirty, there are also parameters you can tweak that affect speed vs. memory footprint vs. numerical precision of the LSI algorithm.gensim uses a novel online incremental streamed distributed training algorithm (quite a mouthful!), which I published in [5]. gensim also executes a stochastic multi-pass algorithm from Halko et al. [4] internally, to accelerate in-core part of the computations. See also [Experiments on the English Wikipedia](https://radimrehurek.com/gensim/wiki.html) for further speed-ups by distributing the computation across a cluster of computers. [Random Projections](http://www.cis.hut.fi/ella/publications/randproj_kdd.pdf)RP aim to reduce vector space dimensionality. This is a very efficient (both memory- and CPU-friendly) approach to approximating TfIdf distances between documents, by throwing in a little randomness. Recommended target dimensionality is again in the hundreds/thousands, depending on your dataset.
###Code
model = models.RpModel(corpus_tfidf, num_topics=500)
###Output
_____no_output_____
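###Markdown
Returning to the LSI online-training note above, a minimal sketch of the `add_documents` call. The same TF-IDF corpus is simply fed back in to illustrate the API; in practice you would pass newly arrived documents.
###Code
# Incrementally update the 2-topic LSI model defined earlier; its projection is
# updated in place and can keep serving queries while being fed new data.
lsi.add_documents(corpus_tfidf)
###Output
_____no_output_____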
###Markdown
[Latent Dirichlet Allocation, LDA](http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) LDA is yet another transformation from bag-of-words counts into a topic space of lower dimensionality. LDA is a probabilistic extension of LSA (also called multinomial PCA), so LDA’s topics can be interpreted as probability distributions over words. These distributions are, just like with LSA, inferred automatically from a training corpus. Documents are in turn interpreted as a (soft) mixture of these topics (again, just like with LSA).
###Code
model = models.LdaModel(corpus, id2word=dictionary, num_topics=100)
###Output
_____no_output_____
###Markdown
gensim uses a fast implementation of online LDA parameter estimation based on [2], modified to run in distributed mode on a cluster of computers. [Hierarchical Dirichlet Process, HDP](http://jmlr.csail.mit.edu/proceedings/papers/v15/wang11a/wang11a.pdf) HDP is a non-parametric bayesian method (note the missing number of requested topics):
###Code
model = models.HdpModel(corpus, id2word=dictionary)
###Output
_____no_output_____
###Markdown
Topics and Transformation Don't forget to set
###Code
import logging
import os.path
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
###Output
_____no_output_____
###Markdown
if you want to see logging events. Transformation interfaceIn the previous tutorial on [Corpora and Vector Spaces](https://radimrehurek.com/gensim/tut1.html), we created a corpus of documents represented as a stream of vectors. To continue, let’s fire up gensim and use that corpus:
###Code
from gensim import corpora, models, similarities
if (os.path.exists("/tmp/deerwester.dict")):
dictionary = corpora.Dictionary.load('/tmp/deerwester.dict')
corpus = corpora.MmCorpus('/tmp/deerwester.mm')
print("Used files generated from first tutorial")
else:
print("Please run first tutorial to generate data set")
print (dictionary[0])
print (dictionary[1])
print (dictionary[2])
###Output
interface
computer
human
###Markdown
In this tutorial, I will show how to transform documents from one vector representation into another. This process serves two goals:1. To bring out hidden structure in the corpus, discover relationships between words and use them to describe the documents in a new and (hopefully) more semantic way.1. To make the document representation more compact. This both improves efficiency (new representation consumes less resources) and efficacy (marginal data trends are ignored, noise-reduction). Creating a transformationThe transformations are standard Python objects, typically initialized by means of a training corpus:
###Code
tfidf = models.TfidfModel(corpus) # step 1 -- initialize a model
###Output
_____no_output_____
###Markdown
We used our old corpus from tutorial 1 to initialize (train) the transformation model. Different transformations may require different initialization parameters; in case of TfIdf, the “training” consists simply of going through the supplied corpus once and computing document frequencies of all its features. Training other models, such as Latent Semantic Analysis or Latent Dirichlet Allocation, is much more involved and, consequently, takes much more time.> Note:> Transformations always convert between two specific vector spaces. The same vector space (= the same set of feature ids) must be used for training as well as for subsequent vector transformations. Failure to use the same input feature space, such as applying a different string preprocessing, using different feature ids, or using bag-of-words input vectors where TfIdf vectors are expected, will result in feature mismatch during transformation calls and consequently in either garbage output and/or runtime exceptions.
###Code
doc_bow = [(0, 1), (1, 1)]
print(tfidf[doc_bow]) # step 2 -- use the model to transform vectors
###Output
[(0, 0.7071067811865476), (1, 0.7071067811865476)]
###Markdown
Or to apply a transformation to a whole corpus:
###Code
corpus_tfidf = tfidf[corpus]
for doc in corpus_tfidf:
print(doc)
###Output
[(0, 0.5773502691896257), (1, 0.5773502691896257), (2, 0.5773502691896257)]
[(1, 0.44424552527467476), (3, 0.44424552527467476), (4, 0.44424552527467476), (5, 0.44424552527467476), (6, 0.3244870206138555), (7, 0.3244870206138555)]
[(0, 0.5710059809418182), (6, 0.4170757362022777), (7, 0.4170757362022777), (8, 0.5710059809418182)]
[(2, 0.49182558987264147), (6, 0.7184811607083769), (8, 0.49182558987264147)]
[(3, 0.6282580468670046), (4, 0.6282580468670046), (7, 0.45889394536615247)]
[(9, 1.0)]
[(9, 0.7071067811865475), (10, 0.7071067811865475)]
[(9, 0.5080429008916749), (10, 0.5080429008916749), (11, 0.695546419520037)]
[(5, 0.6282580468670046), (10, 0.45889394536615247), (11, 0.6282580468670046)]
###Markdown
In this particular case, we are transforming the same corpus that we used for training, but this is only incidental. Once the transformation model has been initialized, it can be used on any vectors (provided they come from the same vector space, of course), even if they were not used in the training corpus at all. This is achieved by a process called folding-in for LSA, by topic inference for LDA etc.> Note: > Calling model[corpus] only creates a wrapper around the old corpus document stream – actual conversions are done on-the-fly, during document iteration. We cannot convert the entire corpus at the time of calling corpus_transformed = model[corpus], because that would mean storing the result in main memory, and that contradicts gensim’s objective of memory-independence. If you will be iterating over the transformed corpus_transformed multiple times, and the transformation is costly, serialize the resulting corpus to disk first and continue using that.Transformations can also be serialized, one on top of another, in a sort of chain:
###Code
lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2) # initialize an LSI transformation
corpus_lsi = lsi[corpus_tfidf] # create a double wrapper over the original corpus: bow->tfidf->fold-in-lsi
###Output
_____no_output_____
###Markdown
Here we transformed our Tf-Idf corpus via [Latent Semantic Indexing](http://en.wikipedia.org/wiki/Latent_semantic_indexing) into a latent 2-D space (2-D because we set num_topics=2). Now you’re probably wondering: what do these two latent dimensions stand for? Let’s inspect with models.LsiModel.print_topics():
###Code
lsi.print_topics(2)
###Output
_____no_output_____
###Markdown
(the topics are printed to log – see the note at the top of this page about activating logging)It appears that according to LSI, “trees”, “graph” and “minors” are all related words (and contribute the most to the direction of the first topic), while the second topic practically concerns itself with all the other words. As expected, the first five documents are more strongly related to the second topic while the remaining four documents to the first topic:
###Code
for doc in corpus_lsi: # both bow->tfidf and tfidf->lsi transformations are actually executed here, on the fly
print(doc)
lsi.save('/tmp/model.lsi') # same for tfidf, lda, ...
lsi = models.LsiModel.load('/tmp/model.lsi')
###Output
_____no_output_____
###Markdown
The next question might be: just how exactly similar are those documents to each other? Is there a way to formalize the similarity, so that for a given input document, we can order some other set of documents according to their similarity? Similarity queries are covered in the [next tutorial](https://radimrehurek.com/gensim/tut3.html). Available transformationsGensim implements several popular Vector Space Model algorithms: [Term Frequency * Inverse Document Frequency](http://en.wikipedia.org/wiki/Tf–idf) Tf-Idf expects a bag-of-words (integer values) training corpus during initialization. During transformation, it will take a vector and return another vector of the same dimensionality, except that features which were rare in the training corpus will have their value increased. It therefore converts integer-valued vectors into real-valued ones, while leaving the number of dimensions intact. It can also optionally normalize the resulting vectors to (Euclidean) unit length.
###Code
model = models.TfidfModel(corpus, normalize=True)
###Output
_____no_output_____
###Markdown
[Latent Semantic Indexing, LSI (or sometimes LSA)](http://en.wikipedia.org/wiki/Latent_semantic_indexing) LSI transforms documents from either bag-of-words or (preferably) TfIdf-weighted space into a latent space of a lower dimensionality. For the toy corpus above we used only 2 latent dimensions, but on real corpora, target dimensionality of 200–500 is recommended as a “golden standard” [1].
###Code
model = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=300)
###Output
_____no_output_____
###Markdown
LSI training is unique in that we can continue “training” at any point, simply by providing more training documents. This is done by incremental updates to the underlying model, in a process called online training. Because of this feature, the input document stream may even be infinite – just keep feeding LSI new documents as they arrive, while using the computed transformation model as read-only in the meanwhile! > Example > > model.add_documents(another_tfidf_corpus) now LSI has been trained on tfidf_corpus + another_tfidf_corpus> lsi_vec = model[tfidf_vec] convert some new document into the LSI space, without affecting the model> model.add_documents(more_documents) tfidf_corpus + another_tfidf_corpus + more_documents> lsi_vec = model[tfidf_vec] See the [gensim.models.lsimodel](https://radimrehurek.com/gensim/models/lsimodel.htmlmodule-gensim.models.lsimodel) documentation for details on how to make LSI gradually “forget” old observations in infinite streams. If you want to get dirty, there are also parameters you can tweak that affect speed vs. memory footprint vs. numerical precision of the LSI algorithm.gensim uses a novel online incremental streamed distributed training algorithm (quite a mouthful!), which I published in [5]. gensim also executes a stochastic multi-pass algorithm from Halko et al. [4] internally, to accelerate in-core part of the computations. See also [Experiments on the English Wikipedia](https://radimrehurek.com/gensim/wiki.html) for further speed-ups by distributing the computation across a cluster of computers. [Random Projections](http://www.cis.hut.fi/ella/publications/randproj_kdd.pdf)RP aim to reduce vector space dimensionality. This is a very efficient (both memory- and CPU-friendly) approach to approximating TfIdf distances between documents, by throwing in a little randomness. Recommended target dimensionality is again in the hundreds/thousands, depending on your dataset.
###Code
model = models.RpModel(corpus_tfidf, num_topics=500)
###Output
_____no_output_____
###Markdown
[Latent Dirichlet Allocation, LDA](http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) LDA is yet another transformation from bag-of-words counts into a topic space of lower dimensionality. LDA is a probabilistic extension of LSA (also called multinomial PCA), so LDA’s topics can be interpreted as probability distributions over words. These distributions are, just like with LSA, inferred automatically from a training corpus. Documents are in turn interpreted as a (soft) mixture of these topics (again, just like with LSA).
###Code
model = models.LdaModel(corpus, id2word=dictionary, num_topics=100)
###Output
_____no_output_____
###Markdown
gensim uses a fast implementation of online LDA parameter estimation based on [2], modified to run in distributed mode on a cluster of computers. [Hierarchical Dirichlet Process, HDP](http://jmlr.csail.mit.edu/proceedings/papers/v15/wang11a/wang11a.pdf) HDP is a non-parametric bayesian method (note the missing number of requested topics):
###Code
model = models.HdpModel(corpus, id2word=dictionary)
###Output
_____no_output_____ |
Machine-Learning---Washington/course1/week2/Predicting house prices.ipynb | ###Markdown
Fire up graphlab create
###Code
import graphlab
###Output
_____no_output_____
###Markdown
Load some house sales dataDataset is from house sales in King County, the region where the city of Seattle, WA is located.
###Code
sales = graphlab.SFrame('home_data.gl/')
sales
###Output
_____no_output_____
###Markdown
Exploring the data for housing sales The house price is correlated with the number of square feet of living space.
###Code
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="sqft_living", y="price")
###Output
_____no_output_____
###Markdown
Create a simple regression model of sqft_living to price Split data into training and testing. We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
###Code
train_data,test_data = sales.random_split(.8,seed=0)
###Output
_____no_output_____
###Markdown
Build the regression model using only sqft_living as a feature
###Code
sqft_model = graphlab.linear_regression.create(train_data, target='price', features=['sqft_living'])
###Output
PROGRESS: Creating a validation set from 5 percent of training data. This may take a while.
You can set ``validation_set=None`` to disable validation tracking.
###Markdown
Evaluate the simple model
###Code
print test_data['price'].mean()
print sqft_model.evaluate(test_data)
###Output
{'max_error': 4142831.7339548855, 'rmse': 255188.34872549935}
###Markdown
RMSE of about \$255,188! Let's show what our predictions look like Matplotlib is a Python plotting library we can use to visualize these predictions. You can install it with: 'pip install matplotlib'
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(test_data['sqft_living'],test_data['price'],'.',
test_data['sqft_living'],sqft_model.predict(test_data),'-')
###Output
_____no_output_____
###Markdown
Above: blue dots are original data, green line is the prediction from the simple regression.Below: we can view the learned regression coefficients.
###Code
sqft_model.get('coefficients')
###Output
_____no_output_____
###Markdown
Explore other features in the dataTo build a more elaborate model, we will explore using more features.
###Code
my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
sales[my_features].show()
sales.show(view='BoxWhisker Plot', x='zipcode', y='price')
###Output
_____no_output_____
###Markdown
Pull the bar at the bottom to view more of the data. 98039 is the most expensive zip code. Build a regression model with more features
###Code
my_features_model = graphlab.linear_regression.create(train_data,target='price',features=my_features,validation_set=None)
print my_features
###Output
['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
###Markdown
Comparing the results of the simple model with adding more features
###Code
print sqft_model.evaluate(test_data)
print my_features_model.evaluate(test_data)
###Output
{'max_error': 4142831.7339548855, 'rmse': 255188.34872549935}
{'max_error': 3486584.509381705, 'rmse': 179542.4333126903}
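###Markdown
As a cross-check on what `evaluate` reports, a minimal sketch that recomputes the simple model's RMSE by hand (assuming SArray arithmetic as used elsewhere in this notebook):
###Code
# Recompute RMSE for the sqft-only model directly from its predictions
errors = test_data['price'] - sqft_model.predict(test_data)
rmse_by_hand = (errors * errors).mean() ** 0.5
print rmse_by_hand
###Output
_____no_output_____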
###Markdown
The RMSE goes down from \$255,188 to \$179,542 with more features. Apply learned models to predict prices of 3 houses The first house we will use is considered an "average" house in Seattle.
###Code
house1 = sales[sales['id']=='5309101200']
house1
###Output
_____no_output_____
###Markdown
###Code
print house1['price']
print sqft_model.predict(house1)
print my_features_model.predict(house1)
###Output
[721918.9333272863]
###Markdown
In this case, the model with more features provides a worse prediction than the simpler model with only 1 feature. However, on average, the model with more features is better. Prediction for a second, fancier houseWe will now examine the predictions for a fancier house.
###Code
house2 = sales[sales['id']=='1925069082']
house2
###Output
_____no_output_____
###Markdown
###Code
print sqft_model.predict(house2)
print my_features_model.predict(house2)
###Output
[1446472.4690774973]
###Markdown
In this case, the model with more features provides a better prediction. This behavior is expected here, because this house is more differentiated by features that go beyond its square feet of living space, especially the fact that it's a waterfront house. Last house, super fancyOur last house is a very large one owned by a famous Seattleite.
###Code
bill_gates = {'bedrooms':[8],
'bathrooms':[25],
'sqft_living':[50000],
'sqft_lot':[225000],
'floors':[4],
'zipcode':['98039'],
'condition':[10],
'grade':[10],
'waterfront':[1],
'view':[4],
'sqft_above':[37500],
'sqft_basement':[12500],
'yr_built':[1994],
'yr_renovated':[2010],
'lat':[47.627606],
'long':[-122.242054],
'sqft_living15':[5000],
'sqft_lot15':[40000]}
###Output
_____no_output_____
###Markdown
###Code
print my_features_model.predict(graphlab.SFrame(bill_gates))
###Output
[13749825.525719076]
###Markdown
The model predicts a price of over $13M for this house! But we expect the house to cost much more. (There are very few samples in the dataset of houses that are this fancy, so we don't expect the model to capture a perfect prediction here.) Selection of summary statistics
###Code
zip_house = sales[sales['zipcode'] == '98039']
zip_house
print zip_house['price'].mean()
filt_living = sales[(sales['sqft_living'] > 2000) & (sales['sqft_living'] <= 4000)]
filt_living
count_living = filt_living.num_rows()
count_living
total = sales.num_rows()
total
fract = float(count_living) / float(total)
fract
advanced_features = [
'bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode',
'condition', # condition of house
'grade', # measure of quality of construction
'waterfront', # waterfront property
'view', # type of view
'sqft_above', # square feet above ground
'sqft_basement', # square feet in basement
'yr_built', # the year built
'yr_renovated', # the year renovated
'lat', 'long', # the lat-long of the parcel
'sqft_living15', # average sq.ft. of 15 nearest neighbors
'sqft_lot15', # average lot size of 15 nearest neighbors
]
train_data,test_data = sales.random_split(.8,seed=0)
ad_features_model = graphlab.linear_regression.create(train_data,target='price',features=advanced_features,validation_set = None)
print ad_features_model.evaluate(test_data)
myfeat = my_features_model.evaluate(test_data)
myfeat
myfeat['rmse']
adfeat = ad_features_model.evaluate(test_data)
adfeat
adfeat['rmse']
error = myfeat['rmse'] - adfeat['rmse']
error
###Output
_____no_output_____ |
day4/10. Astropy - Advanced Tables.ipynb | ###Markdown
Advanced Tables for JWST Table design goals and requirements- Easily mutable container of heterogeneous tabular data- Relatively lightweight yet powerful enough for most needs- Responsive to astronomy community needs - For JWST community - if something is missing, broken, needs improvement then ASK!- Deep integration with Astropy (I/O, units, quantity)- Persistent metadata (column units, table header keywords, formatting)- Support missing data Why doesn't Astropy use Pandas DataFrame?- Easily mutable container of heterogeneous tabular data **(only scalar data)**- **Relatively lightweight** yet powerful enough for most needs- Responsive to **astronomy community** needs- **Deep integration with Astropy (I/O, units, quantity)**- **Persistent metadata (column units, table header keywords, formatting)**- Support missing data: **Pandas will cast ``int`` types to ``float64`` to use ``NaN``** - Large ``int64`` values lose precision - Short int (e.g. ``uint8``) values take 4 times as much memory as ``MaskedColumn`` Nevertheless...We recognize Pandas is very fast, powerful and widely used.*Astropy Project recommendation is to use `astropy.Table` where possible. This especially applies to community packages.* Example: multiband photometry of a field Observations in 5 bands of a single field with 5 "galaxy-like" sources- Assumes basic image reduction and source detection is done.- Could be similar to JWST post-image processing workflow. Key Table concepts to be covered- Basic table structure (dict of independent column objects)- Base column class properties and attributes (flexibility in data elements)- Table mutability and formatting- Database operations: join, grouping, binning, stacking, indexing- Missing data- Mixin columns (Quantity, Time, Coordinates, QTable vs. Table)
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import astropy
from astropy import table
from astropy.table import Column, Table, QTable
from astropy.time import Time
import astropy.units as u
import photutils.datasets
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning)
astropy.__version__
###Output
_____no_output_____
###Markdown
Make table of 5 random gaussians corresponding to fake elliptical galaxy-like sourcesUse a photutils utility function [make_random_gaussians_table](https://photutils.readthedocs.io/en/stable/api/photutils.datasets.make_random_gaussians_table.html).
###Code
n_sources = 5
param_ranges = dict([('flux', [1000, 10000]),
('x_mean', [10, 190]),
('y_mean', [10, 190]),
('x_stddev', [2, 5]),
('y_stddev', [2, 5]),
('theta', [0, np.pi])])
sources = photutils.datasets.make_random_gaussians_table(n_sources, param_ranges, random_state = 1)
sources
###Output
_____no_output_____
###Markdown
Digression: learn a little about the Table and Column objects- `Table` is a **container class** where `Table.columns` is the main table data structure- `Table.columns` is an OrderedDict of columns (`Column`, `MaskedColumn`, or mixin-column)- `Column` class inherits from `np.ndarray`- `MaskedColumn` class inherits from `np.ma.MaskedArray`
###Code
type(sources.columns).__mro__
type(sources.columns['flux']).__mro__
sources.columns['flux'] is sources['flux']
###Output
_____no_output_____
###Markdown
Make a synthetic image for cutouts
###Code
img = photutils.datasets.make_gaussian_sources_image(shape=(200, 200),
source_table=sources)
plt.imshow(img); # Trick: trailing semicolon to suppress output
###Output
_____no_output_____
###Markdown
Make postage-stamp cutouts for each source
###Code
# Make integer columns with rounded representation of source mean position
sources['x0'] = np.round(sources['x_mean']).astype(int)
sources['y0'] = np.round(sources['y_mean']).astype(int)
# Generate list of cutout images around each source
npix = 10
cutouts = [img[y0-npix:y0+npix, x0-npix:x0+npix] for x0, y0 in sources['x0', 'y0']]
# `cutouts` is a list of 2-d ndarrays
# Add the cutouts into table: each element is a 2-d image
# This shows:
# - Table mutability and independent columns
# - Storage of ndarray in each table cell
sources['cutout'] = cutouts
plt.imshow(sources['cutout'][3], interpolation='nearest');
###Output
_____no_output_____
###Markdown
Add a source identifier to the table
###Code
sources['id'] = ['jwst-{}-{}'.format(x0, y0) for x0, y0 in sources['x0', 'y0']]
sources
###Output
_____no_output_____
###Markdown
Formatting: let's be a little fussy about the Table:- Put the 'id' column first- Make the precision of table outputs more reasonable (and beautiful!)- Add units to `flux` and `theta` columns
###Code
# Move the `id` column to be the first column
# (Should Table get a method `move_column` to make this easier?)
sources_id = sources['id']
del sources['id']
sources.add_column(sources_id, index=0)
# Set the output formatting for particular columns
for name in ('flux', 'x_mean', 'y_mean', 'x_stddev', 'y_stddev', 'theta'):
sources[name].format = '.3f' # Could also use '%.3f' or '{:.3f}'
sources['cutout'].format = '.3g'
# Set the unit for flux and theta
sources['flux'].unit = u.electron
sources['theta'].unit = u.rad
sources
###Output
_____no_output_____
###Markdown
Digression: table and column summary information
###Code
sources.info
sources.info('stats')
# You can write your own info specifications!
# You can roll your own custom info!
from astropy.utils.data_info import data_info_factory
mystats = data_info_factory(names=['my_min', 'my_median', 'my_max'],
funcs=[np.min, np.median, np.max])
sources.info(mystats)
###Output
_____no_output_____
###Markdown
Column info: name, dtype, unit, format, description
###Code
sources['theta'].info.description = 'Elliptical gaussian rotation angle'
sources['theta'].info
###Output
_____no_output_____
###Markdown
Make fake observations of these sources in 5 bands 'u', 'b', 'v', 'r', 'k'
###Code
def make_observation(sources, band, flux_mult):
"""
Make fake observation of ``sources`` in a field in ``band``. Apply
``flux_mult`` flux multiplier and some gaussian noise on parameters.
"""
n = len(sources)
out = sources.copy()
# Multiply flux by randomized version of flux_mult
out['flux'] *= flux_mult * np.random.normal(loc=1, scale=0.1, size=n)
    # Add 1.0 pixel of position noise
for name in ('x_mean', 'y_mean'):
out[name] += np.random.normal(loc=0, scale=1.0, size=n)
# Add 0.1 pixel noise to stddev
for name in ('x_stddev', 'y_stddev'):
out[name] += np.random.normal(loc=0, scale=0.1, size=n)
# Add a list that repeats the ``band`` as the second column
out.add_column(Column([band] * n, name='band'), index=1)
    # Make integer columns with rounded representation of source mean position
    out['x0'] = np.round(out['x_mean']).astype(int)
    out['y0'] = np.round(out['y_mean']).astype(int)
    # Generate list of cutout images around each source in this band's image
    img = photutils.datasets.make_gaussian_sources_image(shape=(200, 200), source_table=out)
    npix = 10
    cutouts = [img[y0-npix:y0+npix, x0-npix:x0+npix] for x0, y0 in out['x0', 'y0']]
    out['cutout'] = cutouts
return out
sources_list = []
for band, flux_mult in [('u', 0.1),
('b', 0.2),
('v', 0.5),
('r', 1.0),
('k', 1.5)]:
sources_list.append(make_observation(sources, band, flux_mult))
sources_list[0]
# Notice that the formatting and units got inherited into our new tables
# Pretend that there are non-detections in some bands
sources_list[0].remove_rows([1,2,3,4])
sources_list[1].remove_row(1)
sources_list[3].remove_row(0)
###Output
_____no_output_____
###Markdown
Database-like features for more power: vstack, indexing, group and joinOur list of source tables **`sources_list`** has the raw data we need for analysis but is inconvenient. Things we'd like to do:- Find all observations of a particular source- List all sources in a particular band- Compute statistics for a particular source (mean centroid, mean image cutout)- Make a single wide table organized by source (see the join sketch after the stacking cell below) See [Table high-level operations](http://docs.astropy.org/en/stable/table/operations.html) for all the details. Stacking
###Code
# Stack the list of tables to create a single table (database) of every source observation.
srcs = table.vstack(sources_list)
srcs
###Output
_____no_output_____
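###Markdown
One of the goals listed above, a single wide table organized by source, can be sketched with `table.join`. This is illustrative only: the choice of band tables and the suffix names are assumptions, and the notebook's own join step may differ.
###Code
# Hedged sketch: join two of the single-band tables on the source id; colliding
# column names pick up the per-band suffixes given in table_names.
wide = table.join(sources_list[3], sources_list[4], keys='id',
                  join_type='inner', table_names=['r', 'k'])
wide.colnames
###Output
_____no_output_____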
###Markdown
Indexing- Indexing means that supplemental information (an index) is added to the table that allows access to particular elements in `time << O(N)`. - In the case of astropy Table it uses a binary search of an ordered index table `O(log(N))`.
###Code
# Now add a database index on the `id` column. This becomes the 'primary key'.
# In this case it does not need to be unique, though one can declare that an
# index must be unique.
srcs.add_index('id')
# Now access elements with id == 'jwst-27-85'. This should be familiar to Pandas users.
# This returns another Table.
srcs.loc['jwst-27-85']
# Let's make a secondary index to allow slicing the table by band
srcs.add_index('band')
# Get a table of all 'b' band source detections
srcs.loc['band', 'b']
# A special case is if only one table row is selected, in which case
# a Row object is returned. This is convenient for the common use case
# of a table with unique keys.
srcs.loc['band', 'u']
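# If your astropy version supports it, ``loc`` also accepts a list of key
# values against the primary 'id' index (ids below are taken from this table):
srcs.loc[['jwst-27-85', 'jwst-44-133']]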
###Output
_____no_output_____
###Markdown
Digression: difference between Row and length=1 Table- Indexing a single element of a table returns a `Row` object which can be used to set or access a column value. This always returns a scalar value.- Indexing a single row slice of a table returns a Table, so accessing a column returns a `Column` object (an array) with a length of 1.- This is consistent with numpy structured arrays and Pandas (`df.iloc[0]` vs. `df[0:1]`).
###Code
srcs[0]['band']
srcs[0:1]['band']
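# Confirm that the two results really have different types (scalar vs. Column)
type(srcs[0]['band']), type(srcs[0:1]['band'])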
###Output
_____no_output_____
###Markdown
Digression: table access performance**``srcs[0:1]['band']``** and **``srcs['band'][0:1]``** give the same output, but performance is very different!
###Code
# This creates an entire new Table object (slow) and then selects one column (fast)
%timeit srcs[0:1]['band']
# This selects a column (fast) and then slices it (fast-ish, creates new Column)
%timeit srcs['band'][0:1]
# For the most performance, drop the `Column` machinery (no metadata) and use a straight numpy array
%timeit np.array(srcs['band'], copy=False)[0:1]
###Output
_____no_output_____
###Markdown
GroupingAstropy `Table` supports the powerful concept of grouping which lets you group the rows into sub-tables which you can then:- [Examine](http://docs.astropy.org/en/stable/table/operations.html#manipulating-groups): select and loop over groups- [Aggregate](http://docs.astropy.org/en/stable/table/operations.html#aggregation): apply a reduction function like np.mean to each group- [Filter](http://docs.astropy.org/en/stable/table/operations.html#filtering): select groups by means of a selection functionThis is a close cousin to indexing, and if a table is already indexed then creating the grouped version is faster.
###Code
srcs_grouped = srcs.group_by('id')
# srcs_grouped has all the same rows but now ordered by ``id``
srcs_grouped
for src in srcs_grouped.groups:
print(src)
# Now let's make a new table where each row is the mean of all rows in the group
mean_srcs = srcs_grouped.groups.aggregate(np.mean)
mean_srcs
# We can define custom behavior depending on column type or even name
def sources_mean(arr):
if arr.dtype.kind in ('S', 'U'):
out= ', '.join(arr)
elif arr.info.name == 'flux':
# Take the log mean
out = np.exp(np.mean(np.log(arr)))
else:
out = np.mean(arr, axis=0)
return out
mean_srcs = srcs_grouped.groups.aggregate(sources_mean)
mean_srcs
plt.imshow(mean_srcs['cutout'][3], interpolation='nearest')
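# Groups can also be filtered. A minimal sketch with a hypothetical criterion
# (keep only sources detected in at least 4 bands); the filter function is
# called with each group table plus the key column names and returns a bool.
def detected_in_most_bands(group, key_colnames):
    return len(group) >= 4

srcs_grouped.groups.filter(detected_in_most_bands)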
###Output
_____no_output_____
###Markdown
Digression: BinningA common tool in analysis is to **bin** a table based on some reference value. Examples:- Photometry of a binary star in several bands taken over a span of time which should be binned by orbital phase.- Reducing the sampling density for a table by combining 100 rows at a time.- Unevenly sampled historical data which should be binned to four points per year.The common theme in all these cases is to convert the key value array into a new `float`- or `int`-valued array whose values are identical for rows in the same output bin. As an example, generate a fake light curve:
###Code
year = np.linspace(2000.0, 2010.0, 200) # 200 observations over 10 years
period = 1.811
y0 = 2005.2
mag = 14.0 + 1.2 * np.sin(2 * np.pi * (year - y0) / period) + np.random.normal(scale=0.1, size=200)
phase = ((year - y0) / period) % 1.0
dat = Table([year, phase, mag], names=['year', 'phase', 'mag'])
plt.figure(figsize=(8, 2))
plt.subplot(1, 2, 1)
plt.plot(dat['year'], dat['mag'], '.')
plt.xlabel('year')
plt.subplot(1, 2, 2)
plt.xlabel('phase')
plt.plot(dat['phase'], dat['mag'], '.');
phase_bin = np.trunc(phase / 0.1)
phase_bin[:50]
dat_grouped = dat.group_by(phase_bin)
dat_mean = dat_grouped.groups.aggregate(np.mean)
dat_std = dat_grouped.groups.aggregate(np.std)
plt.figure(figsize=(4, 2))
plt.xlabel('phase')
plt.errorbar(x=dat_mean['phase'], xerr=0.05, y=dat_mean['mag'], yerr=dat_std['mag'], fmt='.');
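# The same pattern covers the other binning use cases listed above, e.g.
# reducing the sampling density by combining rows 10 at a time (the bin size
# here is an arbitrary choice for illustration):
row_bin = np.arange(len(dat)) // 10
dat_rebinned = dat.group_by(row_bin).groups.aggregate(np.mean)
dat_rebinned[:3]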
###Output
_____no_output_____
###Markdown
Join tables to make a single wide table by source `id`- Have one row corresponding to each of the 5 sources- Each row has columns with the 5 band u, b, v, r, k properties- De-duplicate column names by labeling as `{colname}_{band}`Because there are non-detections for some bands / sources, the result is a **Masked Table**.
###Code
sources_id = None
for left, right in zip(sources_list[:-1], sources_list[1:]):
sources_id = table.join(left=sources_id or left,
right=right,
keys='id',
join_type='outer',
table_names=[left['band'][0], right['band'][0]])
sources_id
# Inspect a masked element
sources_id.add_index('id')
sources_id.loc['jwst-44-133']['flux_u'] is np.ma.masked
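# Masked entries can be replaced when downstream code cannot handle them;
# the fill value here (NaN) is just an illustrative choice.
sources_id['flux_u'].filled(np.nan)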
###Output
_____no_output_____
###Markdown
Astropy integration: quantities, units, mixin columns, QTable and all thatA major feature of astropy `Table` is integrated support for:- ``Quantity`` columns that have meaningful units- ``Time`` and ``Coordinate`` columns- Other "mixin columns"Mixin columns are object types that adhere to the mixin protocol and arestored and manipulated **natively** in the table. Example: store a Time object in a table
###Code
t = Table()
t['index'] = [1, 2]
t['time'] = Time(['2001-01-02T12:34:56', '2001-02-03T00:01:02'])
t
# The time column is a bona-fide Time object
t['time']
# In case you don't believe me
t['time'].mjd
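# Coordinates are another mixin column type: a SkyCoord object is stored
# natively as well (the values below are made up for illustration).
from astropy.coordinates import SkyCoord
t['coord'] = SkyCoord(ra=[10.5, 11.2], dec=[41.2, 41.3], unit='deg')
t['coord'].ra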
###Output
_____no_output_____
###Markdown
Quantity: doesn't Table already support units?We saw in the `sources` table that we can define units. Aren't we good to go? **No**
###Code
type(sources['theta'])
sources['theta'].unit
###Output
_____no_output_____
###Markdown
**Normal table `Column` class is just carrying `unit` as an attribute.**It is no more special than `description` or `format`:
###Code
t2 = sources['theta'] ** 2
t2.unit
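# The unit attribute is simply carried along unchanged (still 'rad' rather
# than rad**2), showing that plain Column units are informational only.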
###Output
_____no_output_____
###Markdown
QTable to the rescueAstropy has a `QTable` class for tables that use `Quantity` objects for columns with units.See the [Quantity and QTable](http://docs.astropy.org/en/stable/table/mixin_columns.html#quantity-and-qtable) section for more details.
###Code
# Let's make `flux` and `theta` be real Quantity objects!
qsources = QTable(sources)
qsources
###Output
_____no_output_____
###Markdown
*The repeated presence of `electron` and `rad` in each Quantity value is a problem that is fixed in 1.3-dev*.
###Code
type(qsources['theta'])
qt2 = qsources['theta'] ** 2
qt2.unit
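# Because this is a real Quantity, the squared unit propagates and converts:
qt2.to(u.deg ** 2)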
###Output
_____no_output_____
###Markdown
Summary of `Table` and `QTable`In short, `Table` and `QTable` are **identical in every way except for handling columns with units**:- `Table` uses `Column` for any columns with units (with informational-only unit attribute)- `QTable` uses `Quantity` for any columns with units (with meaningful unit attribute)Use `QTable` in general if you are fully on-board with using `Quantity` and do not deal with much legacy code.Use `Table` if you are using code that is not `Quantity`-aware, OR if you need full missing data support. `Quantity` does not support missing (masked) data. Digression: storing a Pandas Series within Astropy Table
###Code
from astropy.utils.data_info import ParentDtypeInfo
import pandas as pd
class SeriesMixin(pd.Series):
info = ParentDtypeInfo()
s = SeriesMixin((np.arange(5)-2)**2)
pt = Table([s], names=['s'])
pt['s'].info
isinstance(pt['s'], pd.Series)
pt['s'].plot();
###Output
_____no_output_____ |
code/pr3_edge_detection.ipynb | ###Markdown
OAK - logo generation - doc: https://github.com/openaiknowledge/pr3 Import Libraries
###Code
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout, Bidirectional
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import regularizers
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow.keras.utils
import tensorflow as tf
import numpy as np
import pandas as pd
from google.colab import drive
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import os
import PIL
import PIL.Image
import tensorflow_datasets as tfds
import pathlib
import time
from tensorflow.image import sobel_edges
###Output
_____no_output_____
###Markdown
Load Dataset
###Code
BASE_FOLDER = '/content/drive/My Drive/openaiknowledge/pr3/'
DATA = BASE_FOLDER + 'data/1/' #version 1
IMAGES = DATA + "images/"
MODEL = BASE_FOLDER + "model/1/"
IMAGES_GENERATED = IMAGES + "generated/"
drive.mount('/content/drive')
def plot_image(image):
plt.imshow(image, cmap="binary")
plt.axis("off")
def show_image_pil(image_url):
image = PIL.Image.open(image_url)
plot_image(image)
def show_image(image_url):
image = tf.keras.preprocessing.image.load_img(image_url)
plot_image(image)
###Output
_____no_output_____
###Markdown
Preprocessing data
###Code
def normalize_image(image):
return (image - 127.5) / 127.5
def desnormalize_image(image):
return (image * 127.5) + 127.5
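# The two helpers above map uint8 pixel values in [0, 255] to [-1, 1] and
# back, the usual scaling when feeding images to a tanh-output generator.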
def rgb2gray(rgb):
#return np.dot(rgb[...,:3], [0.2989, 0.5870, 0.1140])
value = tf.image.rgb_to_grayscale(rgb, name=None)
return tf.reshape(value, list(rgb.shape)[0:2]) #plt needs gray in 2 dimensions
batch_size = 32
img_height = 256 #28 #todo review
img_width = img_height
train_2011_path = IMAGES + "space" #'/content/drive/My Drive/openaiknowledge/pr3/data/1/images/2001'
train_ds_2001 = tf.keras.preprocessing.image_dataset_from_directory(
train_2011_path,
validation_split=0.2,
subset="training",
seed=42,
image_size=(img_height, img_width),
batch_size=batch_size,
    smart_resize=True)
#train_images_2001 = train_images.reshape(train_images.shape[0], img_height, img_width, 1).astype('float32')
#train_images_2001 = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
print(train_ds_2001.class_names)
print(type(train_ds_2001))
###Output
<class 'tensorflow.python.data.ops.dataset_ops.BatchDataset'>
###Markdown
Visualize the data Edge detectionBased on: http://thegrimm.net/2017/12/14/tensorflow-image-convolution-edge-detection/https://www.tensorflow.org/api_docs/python/tf/image/sobel_edges Prepare data from dataset
###Code
image_path = IMAGES + "space/" + "hal.jpeg"
image_bytes = tf.io.read_file(image_path)
image = tf.image.decode_image(image_bytes)
image = tf.cast(image, tf.float32)
image = tf.expand_dims(image, 0)
plt.imshow(image[0].numpy().astype("uint8"))
plt.axis("off")
sobel = tf.image.sobel_edges(image)
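# tf.image.sobel_edges returns shape [batch, h, w, channels, 2]; the last
# axis holds the [dy, dx] gradient pair for each channel, which is why the
# y- and x-direction maps are sliced out below.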
sobel_y = np.asarray(sobel[0, :, :, :, 0]) # sobel in y-direction
sobel_x = np.asarray(sobel[0, :, :, :, 1]) # sobel in x-direction
plt.imshow(sobel_y[..., 0] / 4 + 0.5, cmap="gray")
plt.axis("off")
plt.imshow(sobel_x[..., 0] / 4 + 0.5,cmap="gray")
plt.axis("off")
plt.savefig('hal_gray.png')
print(sobel.shape)
sample = sobel[0][..., 1] #tf.squeeze(sobel)
path = IMAGES_GENERATED+"hal_generated_gray_sobel_x.png"
path_array = IMAGES_GENERATED+"hal_generated_gray_sobel"
tf.keras.preprocessing.image.save_img(
path, sample, data_format=None, file_format=None, scale=True
)
sample_gray = rgb2gray(sample)
print(sample.shape)
print(sample_gray.shape)
plt.imshow(sample_gray,cmap="gray")
plt.axis("off")
sobel_sum = sobel_x + sobel_y
sobel_sum_gray = rgb2gray(sobel_sum)
print(sobel_sum_gray.shape)
print(type(sobel_sum_gray))
sobel_sum_gray_3 = np.reshape(sobel_sum_gray,[sobel_sum_gray.shape[0],sobel_sum_gray.shape[1],1])
print(sobel_sum_gray_3.shape)
#tf.keras.preprocessing.image.save_img(path, sobel_sum_gray_3, data_format=None, file_format=None, scale=True)
np.save(path_array, sobel_sum_gray)
plt.imshow(sobel_sum_gray,cmap="gray")
plt.axis("off")
test_image = np.load(path_array+".npy")
print(test_image.shape)
plt.imshow(test_image,cmap="gray")
plt.axis("off")
###Output
(225, 225)
|
02_Pytorch_Forecasting_Example_NBeats_with_Tensorboard.ipynb | ###Markdown
Pytorch Forecasting | N-Beats with Tensorboard Install Pytorch Forecasting and import libraries
###Code
!pip install pytorch-forecasting
import pandas as pd
import matplotlib.pyplot as plt
import torch
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor
from pytorch_lightning.loggers import TensorBoardLogger
from pytorch_forecasting import TimeSeriesDataSet, Baseline, NBeats
from pytorch_forecasting.data.examples import generate_ar_data
from pytorch_forecasting.metrics import SMAPE
###Output
_____no_output_____
###Markdown
Dataset generation
###Code
data = generate_ar_data(seasonality=12, timesteps=364, n_series=1, seed=42, trend=3.0, noise=0.2)
data.head()
plt.plot(data.time_idx, data.value)
plt.title('Generated data')
plt.xlabel('Time index')
plt.ylabel('Value')
plt.show()
plt.plot(data.value)
plt.xlim(354-80, 354)
plt.title('Generated data')
plt.xlabel('Time index')
plt.ylabel('Value')
plt.show()
###Output
_____no_output_____
###Markdown
Creation of datasets and dataloaders
###Code
# Create dataset and dataloaders
max_encoder_length = 60
max_prediction_length = 20
batch_size = 16
training_cutoff = data["time_idx"].max() - max_prediction_length
context_length = max_encoder_length
prediction_length = max_prediction_length
training = TimeSeriesDataSet(
data[lambda x: x.time_idx <= training_cutoff],
time_idx="time_idx",
target="value",
group_ids=["series"],
time_varying_unknown_reals=["value"],
max_encoder_length=context_length,
max_prediction_length=prediction_length,
)
validation = TimeSeriesDataSet.from_dataset(training, data, min_prediction_idx=training_cutoff + 1)
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=0)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=0)
###Output
_____no_output_____
###Markdown
Calculate Baseline Error
###Code
actuals = torch.cat([y[0] for x, y in iter(val_dataloader)])
baseline_predictions = Baseline().predict(val_dataloader)
SMAPE()(baseline_predictions, actuals)
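# Baseline simply repeats the last observed target value over the prediction
# horizon, so this SMAPE is the naive score any real model should beat.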
###Output
_____no_output_____
###Markdown
Train NBeats
###Code
pl.seed_everything(42)
early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=10, verbose=False, mode="min")
lr_logger_callback = LearningRateMonitor()
trainer = pl.Trainer(
max_epochs=100,
gpus=1,
weights_summary="top",
gradient_clip_val=0.1,
callbacks=[early_stop_callback, lr_logger_callback],
limit_train_batches=30,
)
net = NBeats.from_dataset(
training,
learning_rate=0.1,
weight_decay=1e-2,
widths=[32, 512],
backcast_loss_ratio=1.0,
)
###Output
_____no_output_____
###Markdown
Find Optimal Learning Rate
###Code
res = trainer.tuner.lr_find(net, train_dataloader=train_dataloader, val_dataloaders=val_dataloader, min_lr=1e-5)
fig = res.plot(show=True, suggest=True)
fig.show()
###Output
| Name | Type | Params
-----------------------------------------------
0 | loss | MASE | 0
1 | logging_metrics | ModuleList | 0
2 | net_blocks | ModuleList | 1.7 M
-----------------------------------------------
1.7 M Trainable params
0 Non-trainable params
1.7 M Total params
###Markdown
Sometimes the red point in the chart does not correspond to the minimum of the loss function.
###Code
print(f"Suggested learning rate: {res.suggestion()}")
# Look at the chart above and set the learning rate that corresponds
# to the minimum of the loss function. Usually, 0.1 is a good choice.
net.hparams.learning_rate = 0.1
###Output
_____no_output_____
###Markdown
Set final parameters before the training
###Code
net.hparams.log_interval = 10
net.hparams.log_val_interval = 1
###Output
_____no_output_____
###Markdown
Training time
###Code
trainer.fit(
net,
train_dataloader=train_dataloader,
val_dataloaders=val_dataloader,
)
###Output
_____no_output_____
###Markdown
The best model and its performance
###Code
best_model_path = trainer.checkpoint_callback.best_model_path
best_model = NBeats.load_from_checkpoint(best_model_path)
print(best_model_path)
###Output
/content/lightning_logs/version_1/checkpoints/epoch=8-step=143.ckpt
###Markdown
Metrics of the best model
###Code
actuals = torch.cat([y[0] for x, y in iter(val_dataloader)])
predictions = best_model.predict(val_dataloader)
print("MAE: {0:.3}".format((actuals - predictions).abs().mean().item()))
print("MAPE: {0:.3}%".format((100 * (actuals - predictions) / actuals).abs().mean().item()))
raw_predictions, x = best_model.predict(val_dataloader, mode="raw", return_x=True)
best_model.plot_prediction(x, raw_predictions, add_loss_to_title=True);
best_model.plot_interpretation(x, raw_predictions, idx=0);
%load_ext tensorboard
%tensorboard --logdir lightning_logs
###Output
_____no_output_____ |
PythonNotebooks/workshops/201903_Edinburgh/notebooks/4_FileConcatenation.ipynb | ###Markdown
MANDATORY PACKAGES
###Code
import xarray
import glob
import os
###Output
_____no_output_____
###Markdown
FILES PATTERN TO CONCATENATE Let's say we have downloaded 2 consecutive files from a model:
###Code
file_1 = 'global-analysis-forecast-phy-001-024_1552454021202.nc'
file_2 = 'global-analysis-forecast-phy-001-024_1552454378757.nc'
###Output
_____no_output_____
###Markdown
Let's have a look to its coordinates:
###Code
xarray.open_dataset(file_1).coords
xarray.open_dataset(file_2).coords
###Output
_____no_output_____
###Markdown
If we look at the time coordinate, there is only one element in it, which means that each file we have in hand is just a daily mean. So what do we do to concatenate 2 consecutive daily means?
###Code
files_path = os.getcwd()  # we presume the files are here; change this if not
print(files_path)
common_pattern = 'global-analysis-forecast-phy-001-024_*.nc'
os.chdir(files_path)
merged_file = xarray.merge([xarray.open_dataset(f) for f in glob.glob(common_pattern)])
merged_file.coords
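# Since the two files differ only along ``time``, an explicit concatenation
# along that dimension gives the same result (a sketch, not required above):
concat_file = xarray.concat([xarray.open_dataset(f) for f in sorted(glob.glob(common_pattern))], dim='time')
concat_file.coords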
###Output
_____no_output_____
###Markdown
Yeah! We have it. Now let's save it:
###Code
merged_file.to_netcdf('global-analysis-forecast-phy-001-024.nc')
###Output
_____no_output_____ |
lessons/0_introduction_to_jupyter_notebooks.ipynb | ###Markdown
Jupyter Notebooks TutorialIn this four-part course, we will constantly use Jupyter Notebooks, so it's important to be comfortable with how they work. This tutorial is meant for you to practice how to run code in this file to help you throughout the course. Also, don't worry about understanding the code as we will learn about what it means on day one. What is a Jupyter Notebook?Jupyter Notebooks are, at their core, environments that allow us to write and **interpret** our code in real time. There are many programming languages that can be used in these notebooks including Python, R, Julia, and Matlab. However, in this course, we will be concentrating on Python. How To Run Python Code In our Jupyter Notebooks, this is what our code will look like:
###Code
1 + 1
###Output
_____no_output_____
###Markdown
To see the result of the code also known as the **output**, we must run it. You have two options:1. You can either click the play button to the left of the block to run the code in Google Colab.2. You can click on the block and press the keys `shift + enter`.As practice, try running the Python code above. The output of the code above should be a **2**. Additional Practice This Python code should output 50:
###Code
5 * 10
###Output
_____no_output_____
###Markdown
This Python code should output 3:
###Code
15 / 3
###Output
_____no_output_____
###Markdown
This Python code should output 90:
###Code
100 - 10
###Output
_____no_output_____ |